sklearn.preprocessing.OrdinalEncoder
class sklearn.preprocessing.OrdinalEncoder(*, categories='auto', dtype=<class 'numpy.float64'>, handle_unknown='error', unknown_value=None) [source]
Encode categorical features as an integer array. The input to this transformer should be an array-like of integers or strings, denoting the values taken on by categorical (discrete) features. The features are converted to ordinal integers, resulting in a single column of integers (0 to n_categories - 1) per feature. Read more in the User Guide. New in version 0.20.

Parameters

categories : ‘auto’ or a list of array-like, default=’auto’
Categories (unique values) per feature: ‘auto’ : determine categories automatically from the training data. list : categories[i] holds the categories expected in the ith column. The passed categories should not mix strings and numeric values, and should be sorted in the case of numeric values. The used categories can be found in the categories_ attribute.

dtype : number type, default=np.float64
Desired dtype of output.

handle_unknown : {‘error’, ‘use_encoded_value’}, default=’error’
When set to ‘error’, an error is raised if an unknown categorical feature is present during transform. When set to ‘use_encoded_value’, the encoded value of unknown categories is set to the value given for the parameter unknown_value. In inverse_transform, an unknown category is denoted as None. New in version 0.24.

unknown_value : int or np.nan, default=None
When handle_unknown is set to ‘use_encoded_value’, this parameter is required and sets the encoded value of unknown categories. It must be distinct from the values used to encode any of the categories in fit. If set to np.nan, the dtype parameter must be a float dtype. New in version 0.24.

Attributes

categories_ : list of arrays
The categories of each feature determined during fit (in order of the features in X and corresponding with the output of transform). This does not include categories that weren’t seen during fit.

See also
OneHotEncoder
Performs a one-hot encoding of categorical features.
LabelEncoder
Encodes target labels with values between 0 and n_classes-1.

Examples

Given a dataset with two features, we let the encoder find the unique values per feature and transform the data to an ordinal encoding.

>>> from sklearn.preprocessing import OrdinalEncoder
>>> enc = OrdinalEncoder()
>>> X = [['Male', 1], ['Female', 3], ['Female', 2]]
>>> enc.fit(X)
OrdinalEncoder()
>>> enc.categories_
[array(['Female', 'Male'], dtype=object), array([1, 2, 3], dtype=object)]
>>> enc.transform([['Female', 3], ['Male', 1]])
array([[0., 2.],
[1., 0.]])
>>> enc.inverse_transform([[1, 0], [0, 1]])
array([['Male', 1],
['Female', 2]], dtype=object)
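The handle_unknown and unknown_value parameters (new in 0.24) can be combined to tolerate categories unseen during fit. A minimal sketch:

```python
import numpy as np
from sklearn.preprocessing import OrdinalEncoder

# Encode unseen categories as np.nan instead of raising an error.
# Note: unknown_value=np.nan requires a float output dtype (the default).
enc = OrdinalEncoder(handle_unknown='use_encoded_value', unknown_value=np.nan)
enc.fit([['Male', 1], ['Female', 3], ['Female', 2]])

# 'Other' and 7 were not seen during fit, so both map to nan.
result = enc.transform([['Other', 7]])
print(result)  # [[nan nan]]
```

With an integer unknown_value (e.g. -1), an integer dtype can be kept instead, as long as -1 does not collide with an encoded category.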
Methods
fit(X[, y]) Fit the OrdinalEncoder to X.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
inverse_transform(X) Convert the data back to the original representation.
set_params(**params) Set the parameters of this estimator.
transform(X) Transform X to ordinal codes.
fit(X, y=None) [source]
Fit the OrdinalEncoder to X.

Parameters

X : array-like of shape (n_samples, n_features)
The data to determine the categories of each feature.

y : None
Ignored. This parameter exists only for compatibility with Pipeline.

Returns

self
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters

X : array-like of shape (n_samples, n_features)
Input samples.

y : array-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).

**fit_params : dict
Additional fit parameters.

Returns

X_new : ndarray of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get parameters for this estimator.

Parameters

deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns

params : dict
Parameter names mapped to their values.
inverse_transform(X) [source]
Convert the data back to the original representation.

Parameters

X : array-like or sparse matrix of shape (n_samples, n_encoded_features)
The transformed data.

Returns

X_tr : array-like of shape (n_samples, n_features)
Inverse transformed array.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters

**params : dict
Estimator parameters.

Returns

self : estimator instance
Estimator instance.
transform(X) [source]
Transform X to ordinal codes.

Parameters

X : array-like of shape (n_samples, n_features)
The data to encode.

Returns

X_out : sparse matrix or a 2-d array
Transformed input.
Examples using sklearn.preprocessing.OrdinalEncoder
Categorical Feature Support in Gradient Boosting
Combine predictors using stacking
Poisson regression and non-normal loss
sklearn.preprocessing.PolynomialFeatures
class sklearn.preprocessing.PolynomialFeatures(degree=2, *, interaction_only=False, include_bias=True, order='C') [source]
Generate polynomial and interaction features. Generate a new feature matrix consisting of all polynomial combinations of the features with degree less than or equal to the specified degree. For example, if an input sample is two dimensional and of the form [a, b], the degree-2 polynomial features are [1, a, b, a^2, ab, b^2].

Parameters

degree : int, default=2
The degree of the polynomial features.

interaction_only : bool, default=False
If True, only interaction features are produced: features that are products of at most degree distinct input features (so not x[1] ** 2, x[0] * x[2] ** 3, etc.).

include_bias : bool, default=True
If True (default), then include a bias column, the feature in which all polynomial powers are zero (i.e. a column of ones, which acts as an intercept term in a linear model).

order : {‘C’, ‘F’}, default=’C’
Order of output array in the dense case. ‘F’ order is faster to compute, but may slow down subsequent estimators. New in version 0.21.

Attributes

powers_ : ndarray of shape (n_output_features, n_input_features)
powers_[i, j] is the exponent of the jth input in the ith output.

n_input_features_ : int
The total number of input features.

n_output_features_ : int
The total number of polynomial output features, computed by iterating over all suitably sized combinations of input features.

Notes

Be aware that the number of features in the output array scales polynomially in the number of features of the input array, and exponentially in the degree. High degrees can cause overfitting. See examples/linear_model/plot_polynomial_interpolation.py.

Examples

>>> import numpy as np
>>> from sklearn.preprocessing import PolynomialFeatures
>>> X = np.arange(6).reshape(3, 2)
>>> X
array([[0, 1],
[2, 3],
[4, 5]])
>>> poly = PolynomialFeatures(2)
>>> poly.fit_transform(X)
array([[ 1., 0., 1., 0., 0., 1.],
[ 1., 2., 3., 4., 6., 9.],
[ 1., 4., 5., 16., 20., 25.]])
>>> poly = PolynomialFeatures(interaction_only=True)
>>> poly.fit_transform(X)
array([[ 1., 0., 1., 0.],
[ 1., 2., 3., 6.],
[ 1., 4., 5., 20.]])
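The powers_ attribute makes the column order of the output explicit; a small sketch continuing the example above:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X = np.arange(6).reshape(3, 2)
poly = PolynomialFeatures(degree=2).fit(X)

# powers_[i, j] is the exponent of input feature j in output feature i.
# Rows correspond to the output columns: 1, x0, x1, x0^2, x0*x1, x1^2.
print(poly.powers_)
# [[0 0]
#  [1 0]
#  [0 1]
#  [2 0]
#  [1 1]
#  [0 2]]

print(poly.n_output_features_)  # 6
```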
Methods
fit(X[, y]) Compute number of output features.
fit_transform(X[, y]) Fit to data, then transform it.
get_feature_names([input_features]) Return feature names for output features
get_params([deep]) Get parameters for this estimator.
set_params(**params) Set the parameters of this estimator.
transform(X) Transform data to polynomial features
fit(X, y=None) [source]
Compute the number of output features.

Parameters

X : {array-like, sparse matrix} of shape (n_samples, n_features)
The data.

y : None
Ignored.

Returns

self : object
Fitted transformer.
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters

X : array-like of shape (n_samples, n_features)
Input samples.

y : array-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).

**fit_params : dict
Additional fit parameters.

Returns

X_new : ndarray of shape (n_samples, n_features_new)
Transformed array.
get_feature_names(input_features=None) [source]
Return feature names for output features.

Parameters

input_features : list of str of shape (n_features,), default=None
String names for input features if available. By default, “x0”, “x1”, … “xn_features” is used.

Returns

output_feature_names : list of str of shape (n_output_features,)
get_params(deep=True) [source]
Get parameters for this estimator.

Parameters

deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns

params : dict
Parameter names mapped to their values.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters

**params : dict
Estimator parameters.

Returns

self : estimator instance
Estimator instance.
transform(X) [source]
Transform data to polynomial features.

Parameters

X : {array-like, sparse matrix} of shape (n_samples, n_features)
The data to transform, row by row. Prefer CSR over CSC for sparse input (for speed), but CSC is required if the degree is 4 or higher. If the degree is less than 4 and the input format is CSC, it will be converted to CSR, have its polynomial features generated, then converted back to CSC. If the degree is 2 or 3, the method described in “Leveraging Sparsity to Speed Up Polynomial Feature Expansions of CSR Matrices Using K-Simplex Numbers” by Andrew Nystrom and John Hughes is used, which is much faster than the method used on CSC input. For this reason, a CSC input will be converted to CSR, and the output will be converted back to CSC prior to being returned, hence the preference of CSR.

Returns

XP : {ndarray, sparse matrix} of shape (n_samples, NP)
The matrix of features, where NP is the number of polynomial features generated from the combination of inputs. If a sparse matrix is provided, it will be converted into a sparse csr_matrix.
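A sketch of the sparse-input behavior described above, using CSR input to keep the fast degree-2 path (the variable names are illustrative):

```python
import numpy as np
from scipy import sparse
from sklearn.preprocessing import PolynomialFeatures

# For degree 2 or 3, CSR input uses the fast K-simplex expansion.
X = sparse.csr_matrix(np.arange(6).reshape(3, 2))
XP = PolynomialFeatures(degree=2).fit_transform(X)

print(XP.format)     # 'csr' -- sparse in, sparse out
print(XP.toarray())  # same values as the dense example above
```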
Examples using sklearn.preprocessing.PolynomialFeatures
Release Highlights for scikit-learn 0.24
Polynomial interpolation
Robust linear estimator fitting
Poisson regression and non-normal loss
Underfitting vs. Overfitting
sklearn.preprocessing.PowerTransformer
class sklearn.preprocessing.PowerTransformer(method='yeo-johnson', *, standardize=True, copy=True) [source]
Apply a power transform featurewise to make data more Gaussian-like. Power transforms are a family of parametric, monotonic transformations that are applied to make data more Gaussian-like. This is useful for modeling issues related to heteroscedasticity (non-constant variance), or other situations where normality is desired. Currently, PowerTransformer supports the Box-Cox transform and the Yeo-Johnson transform. The optimal parameter for stabilizing variance and minimizing skewness is estimated through maximum likelihood. Box-Cox requires input data to be strictly positive, while Yeo-Johnson supports both positive and negative data. By default, zero-mean, unit-variance normalization is applied to the transformed data. Read more in the User Guide. New in version 0.20.

Parameters

method : {‘yeo-johnson’, ‘box-cox’}, default=’yeo-johnson’
The power transform method. Available methods are: ‘yeo-johnson’ [1], which works with positive and negative values; ‘box-cox’ [2], which only works with strictly positive values.

standardize : bool, default=True
Set to True to apply zero-mean, unit-variance normalization to the transformed output.

copy : bool, default=True
Set to False to perform inplace computation during transformation.

Attributes

lambdas_ : ndarray of float of shape (n_features,)
The parameters of the power transformation for the selected features.

See also

power_transform
Equivalent function without the estimator API.

QuantileTransformer
Maps data to a standard normal distribution with the parameter output_distribution='normal'.

Notes

NaNs are treated as missing values: disregarded in fit, and maintained in transform. For a comparison of the different scalers, transformers, and normalizers, see examples/preprocessing/plot_all_scaling.py.

References

[1] I.K. Yeo and R.A. Johnson, “A new family of power transformations to improve normality or symmetry.” Biometrika, 87(4), pp. 954-959, (2000).

[2] G.E.P. Box and D.R. Cox, “An Analysis of Transformations”, Journal of the Royal Statistical Society B, 26, 211-252 (1964).

Examples

>>> import numpy as np
>>> from sklearn.preprocessing import PowerTransformer
>>> pt = PowerTransformer()
>>> data = [[1, 2], [3, 2], [4, 5]]
>>> print(pt.fit(data))
PowerTransformer()
>>> print(pt.lambdas_)
[ 1.386... -3.100...]
>>> print(pt.transform(data))
[[-1.316... -0.707...]
[ 0.209... -0.707...]
[ 1.106... 1.414...]]
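A round-trip sketch using the Box-Cox variant on strictly positive synthetic data, showing that inverse_transform undoes the transform (the data here is illustrative):

```python
import numpy as np
from sklearn.preprocessing import PowerTransformer

rng = np.random.RandomState(0)
data = rng.lognormal(size=(100, 1))  # strictly positive, right-skewed

# Box-Cox requires strictly positive input; Yeo-Johnson (the default)
# would also accept zeros and negative values.
pt = PowerTransformer(method='box-cox')
transformed = pt.fit_transform(data)

# inverse_transform recovers the original data up to floating-point error,
# including undoing the default zero-mean, unit-variance standardization.
restored = pt.inverse_transform(transformed)
print(np.allclose(restored, data))  # True
```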
Methods
fit(X[, y]) Estimate the optimal parameter lambda for each feature.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
inverse_transform(X) Apply the inverse power transformation using the fitted lambdas.
set_params(**params) Set the parameters of this estimator.
transform(X) Apply the power transform to each feature using the fitted lambdas.
fit(X, y=None) [source]
Estimate the optimal parameter lambda for each feature. The optimal lambda parameter for minimizing skewness is estimated on each feature independently using maximum likelihood.

Parameters

X : array-like of shape (n_samples, n_features)
The data used to estimate the optimal transformation parameters.

y : None
Ignored.

Returns

self : object
Fitted transformer.
fit_transform(X, y=None) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters

X : array-like of shape (n_samples, n_features)
Input samples.

y : array-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).

**fit_params : dict
Additional fit parameters.

Returns

X_new : ndarray of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get parameters for this estimator.

Parameters

deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns

params : dict
Parameter names mapped to their values.
inverse_transform(X) [source]
Apply the inverse power transformation using the fitted lambdas. The inverse of the Box-Cox transformation is given by: if lambda_ == 0:
X = exp(X_trans)
else:
X = (X_trans * lambda_ + 1) ** (1 / lambda_)
The inverse of the Yeo-Johnson transformation is given by: if X >= 0 and lambda_ == 0:
X = exp(X_trans) - 1
elif X >= 0 and lambda_ != 0:
X = (X_trans * lambda_ + 1) ** (1 / lambda_) - 1
elif X < 0 and lambda_ != 2:
X = 1 - (-(2 - lambda_) * X_trans + 1) ** (1 / (2 - lambda_))
elif X < 0 and lambda_ == 2:
X = 1 - exp(-X_trans)
Parameters
Xarray-like of shape (n_samples, n_features)
The transformed data. Returns
Xndarray of shape (n_samples, n_features)
The original data.
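The branch-wise inverse formulas above can be exercised end to end with a minimal round-trip sketch (reusing the toy data from the class example, an illustrative choice): inverse_transform recovers the original input after fit_transform.

```python
import numpy as np
from sklearn.preprocessing import PowerTransformer

# Toy data from the class example above.
X = np.array([[1.0, 2.0], [3.0, 2.0], [4.0, 5.0]])

pt = PowerTransformer()  # method='yeo-johnson' by default
X_trans = pt.fit_transform(X)

# inverse_transform applies the branch-wise inverse shown above
# (after undoing the default zero-mean, unit-variance standardization).
X_back = pt.inverse_transform(X_trans)
print(np.allclose(X, X_back))  # True
```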
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X) [source]
Apply the power transform to each feature using the fitted lambdas. Parameters
Xarray-like of shape (n_samples, n_features)
The data to be transformed using a power transformation. Returns
X_transndarray of shape (n_samples, n_features)
The transformed data.
Examples using sklearn.preprocessing.PowerTransformer
Map data to a normal distribution
Compare the effect of different scalers on data with outliers | sklearn.modules.generated.sklearn.preprocessing.powertransformer |
sklearn.preprocessing.power_transform(X, method='yeo-johnson', *, standardize=True, copy=True) [source]
Power transforms are a family of parametric, monotonic transformations that are applied to make data more Gaussian-like. This is useful for modeling issues related to heteroscedasticity (non-constant variance), or other situations where normality is desired. Currently, power_transform supports the Box-Cox transform and the Yeo-Johnson transform. The optimal parameter for stabilizing variance and minimizing skewness is estimated through maximum likelihood. Box-Cox requires input data to be strictly positive, while Yeo-Johnson supports both positive and negative data. By default, zero-mean, unit-variance normalization is applied to the transformed data. Read more in the User Guide. Parameters
Xarray-like of shape (n_samples, n_features)
The data to be transformed using a power transformation.
method{‘yeo-johnson’, ‘box-cox’}, default=’yeo-johnson’
The power transform method. Available methods are: ‘yeo-johnson’ [1], works with positive and negative values ‘box-cox’ [2], only works with strictly positive values Changed in version 0.23: The default value of the method parameter changed from ‘box-cox’ to ‘yeo-johnson’ in 0.23.
standardizebool, default=True
Set to True to apply zero-mean, unit-variance normalization to the transformed output.
copybool, default=True
Set to False to perform inplace computation during transformation. Returns
X_transndarray of shape (n_samples, n_features)
The transformed data. See also
PowerTransformer
Equivalent transformation with the Transformer API (e.g. as part of a preprocessing Pipeline).
quantile_transform
Maps data to a standard normal distribution with the parameter output_distribution='normal'. Notes NaNs are treated as missing values: disregarded in fit, and maintained in transform. For a comparison of the different scalers, transformers, and normalizers, see examples/preprocessing/plot_all_scaling.py. References
1
I.K. Yeo and R.A. Johnson, “A new family of power transformations to improve normality or symmetry.” Biometrika, 87(4), pp.954-959, (2000).
2
G.E.P. Box and D.R. Cox, “An Analysis of Transformations”, Journal of the Royal Statistical Society B, 26, 211-252 (1964). Examples >>> import numpy as np
>>> from sklearn.preprocessing import power_transform
>>> data = [[1, 2], [3, 2], [4, 5]]
>>> print(power_transform(data, method='box-cox'))
[[-1.332... -0.707...]
[ 0.256... -0.707...]
[ 1.076... 1.414...]]
Warning Risk of data leak. Do not use power_transform unless you know what you are doing. A common mistake is to apply it to the entire data before splitting into training and test sets. This will bias the model evaluation because information would have leaked from the test set to the training set. In general, we recommend using PowerTransformer within a Pipeline in order to prevent most risks of data leaking, e.g.: pipe = make_pipeline(PowerTransformer(),
LogisticRegression()). | sklearn.modules.generated.sklearn.preprocessing.power_transform#sklearn.preprocessing.power_transform |
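The pipeline pattern recommended in the warning can be sketched as follows; the synthetic skewed data and the split here are illustrative assumptions, not part of the API.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PowerTransformer

# Illustrative skewed data: log-normal features, random binary target.
rng = np.random.RandomState(0)
X = rng.lognormal(size=(100, 3))
y = rng.randint(0, 2, size=100)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The transformer is fit inside the pipeline, on the training split only,
# so no information from the test set leaks into the estimated lambdas.
pipe = make_pipeline(PowerTransformer(), LogisticRegression())
pipe.fit(X_train, y_train)
score = pipe.score(X_test, y_test)  # evaluated on held-out data
```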
class sklearn.preprocessing.QuantileTransformer(*, n_quantiles=1000, output_distribution='uniform', ignore_implicit_zeros=False, subsample=100000, random_state=None, copy=True) [source]
Transform features using quantile information. This method transforms the features to follow a uniform or a normal distribution. Therefore, for a given feature, this transformation tends to spread out the most frequent values. It also reduces the impact of (marginal) outliers: this is therefore a robust preprocessing scheme. The transformation is applied on each feature independently. First an estimate of the cumulative distribution function of a feature is used to map the original values to a uniform distribution. The obtained values are then mapped to the desired output distribution using the associated quantile function. Feature values of new/unseen data that fall below or above the fitted range will be mapped to the bounds of the output distribution. Note that this transform is non-linear. It may distort linear correlations between variables measured at the same scale but renders variables measured at different scales more directly comparable. Read more in the User Guide. New in version 0.19. Parameters
n_quantilesint, default=1000 or n_samples
Number of quantiles to be computed. It corresponds to the number of landmarks used to discretize the cumulative distribution function. If n_quantiles is larger than the number of samples, n_quantiles is set to the number of samples as a larger number of quantiles does not give a better approximation of the cumulative distribution function estimator.
output_distribution{‘uniform’, ‘normal’}, default=’uniform’
Marginal distribution for the transformed data. The choices are ‘uniform’ (default) or ‘normal’.
ignore_implicit_zerosbool, default=False
Only applies to sparse matrices. If True, the sparse entries of the matrix are discarded to compute the quantile statistics. If False, these entries are treated as zeros.
subsampleint, default=1e5
Maximum number of samples used to estimate the quantiles for computational efficiency. Note that the subsampling procedure may differ for value-identical sparse and dense matrices.
random_stateint, RandomState instance or None, default=None
Determines random number generation for subsampling and smoothing noise. Please see subsample for more details. Pass an int for reproducible results across multiple function calls. See Glossary.
copybool, default=True
Set to False to perform inplace transformation and avoid a copy (if the input is already a numpy array). Attributes
n_quantiles_int
The actual number of quantiles used to discretize the cumulative distribution function.
quantiles_ndarray of shape (n_quantiles, n_features)
The values corresponding to the quantiles of reference.
references_ndarray of shape (n_quantiles,)
Quantiles of references. See also
quantile_transform
Equivalent function without the estimator API.
PowerTransformer
Perform mapping to a normal distribution using a power transform.
StandardScaler
Perform standardization that is faster, but less robust to outliers.
RobustScaler
Perform robust standardization that removes the influence of outliers but does not put outliers and inliers on the same scale. Notes NaNs are treated as missing values: disregarded in fit, and maintained in transform. For a comparison of the different scalers, transformers, and normalizers, see examples/preprocessing/plot_all_scaling.py. Examples >>> import numpy as np
>>> from sklearn.preprocessing import QuantileTransformer
>>> rng = np.random.RandomState(0)
>>> X = np.sort(rng.normal(loc=0.5, scale=0.25, size=(25, 1)), axis=0)
>>> qt = QuantileTransformer(n_quantiles=10, random_state=0)
>>> qt.fit_transform(X)
array([...])
Methods
fit(X[, y]) Compute the quantiles used for transforming.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
inverse_transform(X) Back-projection to the original space.
set_params(**params) Set the parameters of this estimator.
transform(X) Feature-wise transformation of the data.
fit(X, y=None) [source]
Compute the quantiles used for transforming. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The data used to scale along the features axis. If a sparse matrix is provided, it will be converted into a sparse csc_matrix. Additionally, the sparse matrix needs to be nonnegative if ignore_implicit_zeros is False.
yNone
Ignored. Returns
selfobject
Fitted transformer.
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
inverse_transform(X) [source]
Back-projection to the original space. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The data used to scale along the features axis. If a sparse matrix is provided, it will be converted into a sparse csc_matrix. Additionally, the sparse matrix needs to be nonnegative if ignore_implicit_zeros is False. Returns
Xt{ndarray, sparse matrix} of shape (n_samples, n_features)
The projected data.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X) [source]
Feature-wise transformation of the data. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The data used to scale along the features axis. If a sparse matrix is provided, it will be converted into a sparse csc_matrix. Additionally, the sparse matrix needs to be nonnegative if ignore_implicit_zeros is False. Returns
Xt{ndarray, sparse matrix} of shape (n_samples, n_features)
The projected data. | sklearn.modules.generated.sklearn.preprocessing.quantiletransformer#sklearn.preprocessing.QuantileTransformer |
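A small sketch of the behavior described above: out-of-range values seen at transform time are clipped to the bounds of the (default uniform) output distribution, and n_quantiles_ records the number of landmarks actually used.

```python
import numpy as np
from sklearn.preprocessing import QuantileTransformer

rng = np.random.RandomState(0)
X = np.sort(rng.normal(loc=0.5, scale=0.25, size=(25, 1)), axis=0)

qt = QuantileTransformer(n_quantiles=10, random_state=0).fit(X)
print(qt.n_quantiles_)  # 10 (would be capped at n_samples if larger)

# Unseen values below/above the fitted range map to the output bounds,
# i.e. 0.0 and 1.0 for the default uniform output distribution.
extremes = qt.transform(np.array([[-10.0], [10.0]]))
print(extremes.ravel())
```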
Examples using sklearn.preprocessing.QuantileTransformer
Partial Dependence and Individual Conditional Expectation Plots
Effect of transforming the targets in regression model
Map data to a normal distribution
Compare the effect of different scalers on data with outliers | sklearn.modules.generated.sklearn.preprocessing.quantiletransformer |
sklearn.preprocessing.quantile_transform(X, *, axis=0, n_quantiles=1000, output_distribution='uniform', ignore_implicit_zeros=False, subsample=100000, random_state=None, copy=True) [source]
Transform features using quantile information. This method transforms the features to follow a uniform or a normal distribution. Therefore, for a given feature, this transformation tends to spread out the most frequent values. It also reduces the impact of (marginal) outliers: this is therefore a robust preprocessing scheme. The transformation is applied on each feature independently. First an estimate of the cumulative distribution function of a feature is used to map the original values to a uniform distribution. The obtained values are then mapped to the desired output distribution using the associated quantile function. Feature values of new/unseen data that fall below or above the fitted range will be mapped to the bounds of the output distribution. Note that this transform is non-linear. It may distort linear correlations between variables measured at the same scale but renders variables measured at different scales more directly comparable. Read more in the User Guide. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The data to transform.
axisint, default=0
Axis along which the quantiles are computed. If 0, transform each feature; otherwise (if 1) transform each sample.
n_quantilesint, default=1000 or n_samples
Number of quantiles to be computed. It corresponds to the number of landmarks used to discretize the cumulative distribution function. If n_quantiles is larger than the number of samples, n_quantiles is set to the number of samples as a larger number of quantiles does not give a better approximation of the cumulative distribution function estimator.
output_distribution{‘uniform’, ‘normal’}, default=’uniform’
Marginal distribution for the transformed data. The choices are ‘uniform’ (default) or ‘normal’.
ignore_implicit_zerosbool, default=False
Only applies to sparse matrices. If True, the sparse entries of the matrix are discarded to compute the quantile statistics. If False, these entries are treated as zeros.
subsampleint, default=1e5
Maximum number of samples used to estimate the quantiles for computational efficiency. Note that the subsampling procedure may differ for value-identical sparse and dense matrices.
random_stateint, RandomState instance or None, default=None
Determines random number generation for subsampling and smoothing noise. Please see subsample for more details. Pass an int for reproducible results across multiple function calls. See Glossary.
copybool, default=True
Set to False to perform inplace transformation and avoid a copy (if the input is already a numpy array). If True, a copy of X is transformed, leaving the original X unchanged. Changed in version 0.23: The default value of copy changed from False to True. Returns
Xt{ndarray, sparse matrix} of shape (n_samples, n_features)
The transformed data. See also
QuantileTransformer
Performs quantile-based scaling using the Transformer API (e.g. as part of a preprocessing Pipeline).
power_transform
Maps data to a normal distribution using a power transformation.
scale
Performs standardization that is faster, but less robust to outliers.
robust_scale
Performs robust standardization that removes the influence of outliers but does not put outliers and inliers on the same scale. Notes NaNs are treated as missing values: disregarded in fit, and maintained in transform. Warning Risk of data leak. Do not use quantile_transform unless you know what you are doing. A common mistake is to apply it to the entire data before splitting into training and test sets. This will bias the model evaluation because information would have leaked from the test set to the training set. In general, we recommend using QuantileTransformer within a Pipeline in order to prevent most risks of data leaking, e.g.: pipe = make_pipeline(QuantileTransformer(), LogisticRegression()). For a comparison of the different scalers, transformers, and normalizers, see examples/preprocessing/plot_all_scaling.py. Examples >>> import numpy as np
>>> from sklearn.preprocessing import quantile_transform
>>> rng = np.random.RandomState(0)
>>> X = np.sort(rng.normal(loc=0.5, scale=0.25, size=(25, 1)), axis=0)
>>> quantile_transform(X, n_quantiles=10, random_state=0, copy=True)
array([...]) | sklearn.modules.generated.sklearn.preprocessing.quantile_transform#sklearn.preprocessing.quantile_transform |
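As the See also entry notes, quantile_transform is the functional equivalent of the QuantileTransformer estimator; a quick sketch confirms both produce identical output for the same settings.

```python
import numpy as np
from sklearn.preprocessing import QuantileTransformer, quantile_transform

rng = np.random.RandomState(0)
X = np.sort(rng.normal(loc=0.5, scale=0.25, size=(25, 1)), axis=0)

# Function form (copy=True leaves X untouched) ...
Xt_func = quantile_transform(X, n_quantiles=10, random_state=0, copy=True)
# ... and the equivalent Transformer-API form.
Xt_est = QuantileTransformer(n_quantiles=10, random_state=0).fit_transform(X)

print(np.allclose(Xt_func, Xt_est))  # True
```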
class sklearn.preprocessing.RobustScaler(*, with_centering=True, with_scaling=True, quantile_range=(25.0, 75.0), copy=True, unit_variance=False) [source]
sklearn.preprocessing.RobustScaler
class sklearn.preprocessing.RobustScaler(*, with_centering=True, with_scaling=True, quantile_range=(25.0, 75.0), copy=True, unit_variance=False) [source]
Scale features using statistics that are robust to outliers. This Scaler removes the median and scales the data according to the quantile range (defaults to IQR: Interquartile Range). The IQR is the range between the 1st quartile (25th quantile) and the 3rd quartile (75th quantile). Centering and scaling happen independently on each feature by computing the relevant statistics on the samples in the training set. Median and interquartile range are then stored to be used on later data using the transform method. Standardization of a dataset is a common requirement for many machine learning estimators. Typically this is done by removing the mean and scaling to unit variance. However, outliers can often influence the sample mean / variance in a negative way. In such cases, the median and the interquartile range often give better results. New in version 0.17. Read more in the User Guide. Parameters
with_centeringbool, default=True
If True, center the data before scaling. This will cause transform to raise an exception when attempted on sparse matrices, because centering them entails building a dense matrix which in common use cases is likely to be too large to fit in memory.
with_scalingbool, default=True
If True, scale the data to interquartile range.
quantile_rangetuple (q_min, q_max), 0.0 < q_min < q_max < 100.0, default=(25.0, 75.0), == (1st quantile, 3rd quantile), == IQR
Quantile range used to calculate scale_. New in version 0.18.
copybool, default=True
If False, try to avoid a copy and do inplace scaling instead. This is not guaranteed to always work inplace; e.g. if the data is not a NumPy array or scipy.sparse CSR matrix, a copy may still be returned.
unit_variancebool, default=False
If True, scale data so that normally distributed features have a variance of 1. In general, if the difference between the x-values of q_max and q_min for a standard normal distribution is greater than 1, the dataset will be scaled down. If less than 1, the dataset will be scaled up. New in version 0.24. Attributes
center_array of floats
The median value for each feature in the training set.
scale_array of floats
The (scaled) interquartile range for each feature in the training set. New in version 0.17: scale_ attribute. See also
robust_scale
Equivalent function without the estimator API.
PCA
Further removes the linear correlation across features with ‘whiten=True’. Notes For a comparison of the different scalers, transformers, and normalizers, see examples/preprocessing/plot_all_scaling.py. https://en.wikipedia.org/wiki/Median https://en.wikipedia.org/wiki/Interquartile_range Examples >>> from sklearn.preprocessing import RobustScaler
>>> X = [[ 1., -2., 2.],
... [ -2., 1., 3.],
... [ 4., 1., -2.]]
>>> transformer = RobustScaler().fit(X)
>>> transformer
RobustScaler()
>>> transformer.transform(X)
array([[ 0. , -2. , 0. ],
[-1. , 0. , 0.4],
[ 1. , 0. , -1.6]])
Methods
fit(X[, y]) Compute the median and quantiles to be used for scaling.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
inverse_transform(X) Scale back the data to the original representation.
set_params(**params) Set the parameters of this estimator.
transform(X) Center and scale the data.
fit(X, y=None) [source]
Compute the median and quantiles to be used for scaling. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The data used to compute the median and quantiles used for later scaling along the features axis.
yNone
Ignored. Returns
selfobject
Fitted scaler.
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray array of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
inverse_transform(X) [source]
Scale back the data to the original representation. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The rescaled data to be transformed back. Returns
X_tr{ndarray, sparse matrix} of shape (n_samples, n_features)
Transformed array.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X) [source]
Center and scale the data. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The data used to scale along the specified axis. Returns
X_tr{ndarray, sparse matrix} of shape (n_samples, n_features)
Transformed array.
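As noted under with_centering, sparse input cannot be centered, but it can still be scaled. A short sketch (with illustrative data) of scaling a CSR matrix with centering disabled so the sparsity structure is preserved:

```python
import numpy as np
from scipy import sparse
from sklearn.preprocessing import RobustScaler

X = sparse.csr_matrix(np.array([[1., 0., 2.],
                                [0., 3., 0.],
                                [4., 0., 0.]]))

# with_centering=False avoids building a dense matrix and keeps X sparse
scaler = RobustScaler(with_centering=False).fit(X)
X_tr = scaler.transform(X)
```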
Examples using sklearn.preprocessing.RobustScaler
Compare the effect of different scalers on data with outliers | sklearn.modules.generated.sklearn.preprocessing.robustscaler |
sklearn.preprocessing.robust_scale(X, *, axis=0, with_centering=True, with_scaling=True, quantile_range=(25.0, 75.0), copy=True, unit_variance=False) [source]
Standardize a dataset along any axis. Center to the median and scale component-wise according to the interquartile range. Read more in the User Guide. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The data to center and scale.
axisint, default=0
Axis along which to compute the medians and IQR. If 0, independently scale each feature; otherwise (if 1) scale each sample.
with_centeringbool, default=True
If True, center the data before scaling.
with_scalingbool, default=True
If True, scale the data to the interquartile range.
quantile_rangetuple (q_min, q_max), 0.0 < q_min < q_max < 100.0
default=(25.0, 75.0), == (1st quantile, 3rd quantile), == IQR Quantile range used to calculate scale_. New in version 0.18.
copybool, default=True
Set to False to perform inplace scaling and avoid a copy (if the input is already a numpy array or a scipy.sparse CSR matrix and if axis is 1).
unit_variancebool, default=False
If True, scale data so that normally distributed features have a variance of 1. In general, if the difference between the x-values of q_max and q_min for a standard normal distribution is greater than 1, the dataset will be scaled down. If less than 1, the dataset will be scaled up. New in version 0.24. Returns
X_tr{ndarray, sparse matrix} of shape (n_samples, n_features)
The transformed data. See also
RobustScaler
Performs centering and scaling using the Transformer API (e.g. as part of a preprocessing Pipeline). Notes This implementation will refuse to center scipy.sparse matrices since it would make them non-sparse and would potentially crash the program with memory exhaustion problems. Instead the caller is expected either to set with_centering=False explicitly (in that case, only variance scaling will be performed on the features of the CSR matrix) or to call X.toarray() if the materialized dense array is expected to fit in memory. To avoid a memory copy the caller should pass a CSR matrix. For a comparison of the different scalers, transformers, and normalizers, see examples/preprocessing/plot_all_scaling.py. Warning Risk of data leak Do not use robust_scale unless you know what you are doing. A common mistake is to apply it to the entire data before splitting into training and test sets. This will bias the model evaluation because information would have leaked from the test set to the training set. In general, we recommend using RobustScaler within a Pipeline in order to prevent most risks of data leaking: pipe = make_pipeline(RobustScaler(), LogisticRegression()). | sklearn.modules.generated.sklearn.preprocessing.robust_scale#sklearn.preprocessing.robust_scale
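The leakage-safe pattern recommended above can be sketched end to end: the scaler is fit inside a Pipeline on the training split only, so no test-set statistics leak into training. The dataset here is synthetic and purely illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# the scaler's median/IQR are computed on X_train only
pipe = make_pipeline(RobustScaler(), LogisticRegression())
pipe.fit(X_train, y_train)
score = pipe.score(X_test, y_test)
```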
sklearn.preprocessing.scale(X, *, axis=0, with_mean=True, with_std=True, copy=True) [source]
Standardize a dataset along any axis. Center to the mean and component wise scale to unit variance. Read more in the User Guide. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The data to center and scale.
axisint, default=0
Axis along which to compute the means and standard deviations. If 0, independently standardize each feature; otherwise (if 1) standardize each sample.
with_meanbool, default=True
If True, center the data before scaling.
with_stdbool, default=True
If True, scale the data to unit variance (or equivalently, unit standard deviation).
copybool, default=True
Set to False to perform inplace scaling and avoid a copy (if the input is already a numpy array or a scipy.sparse CSC matrix and if axis is 1). Returns
X_tr{ndarray, sparse matrix} of shape (n_samples, n_features)
The transformed data. See also
StandardScaler
Performs scaling to unit variance using the Transformer API (e.g. as part of a preprocessing Pipeline). Notes This implementation will refuse to center scipy.sparse matrices since it would make them non-sparse and would potentially crash the program with memory exhaustion problems. Instead the caller is expected either to set with_mean=False explicitly (in that case, only variance scaling will be performed on the features of the CSC matrix) or to call X.toarray() if the materialized dense array is expected to fit in memory. To avoid a memory copy the caller should pass a CSC matrix. NaNs are treated as missing values: disregarded to compute the statistics, and maintained during the data transformation. We use a biased estimator for the standard deviation, equivalent to numpy.std(x, ddof=0). Note that the choice of ddof is unlikely to affect model performance. For a comparison of the different scalers, transformers, and normalizers, see examples/preprocessing/plot_all_scaling.py. Warning Risk of data leak Do not use scale unless you know what you are doing. A common mistake is to apply it to the entire data before splitting into training and test sets. This will bias the model evaluation because information would have leaked from the test set to the training set. In general, we recommend using StandardScaler within a Pipeline in order to prevent most risks of data leaking: pipe = make_pipeline(StandardScaler(), LogisticRegression()). | sklearn.modules.generated.sklearn.preprocessing.scale#sklearn.preprocessing.scale
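The equivalence between the scale function and the StandardScaler estimator noted above can be checked directly; a minimal sketch on a small array:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler, scale

X = np.array([[0., 0.], [0., 0.], [1., 1.], [1., 1.]])

# scale(X) and a freshly fitted StandardScaler produce the same result
same = np.allclose(scale(X), StandardScaler().fit_transform(X))
```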
sklearn.preprocessing.StandardScaler
class sklearn.preprocessing.StandardScaler(*, copy=True, with_mean=True, with_std=True) [source]
Standardize features by removing the mean and scaling to unit variance. The standard score of a sample x is calculated as: z = (x - u) / s where u is the mean of the training samples or zero if with_mean=False, and s is the standard deviation of the training samples or one if with_std=False. Centering and scaling happen independently on each feature by computing the relevant statistics on the samples in the training set. Mean and standard deviation are then stored to be used on later data using transform. Standardization of a dataset is a common requirement for many machine learning estimators: they might behave badly if the individual features do not more or less look like standard normally distributed data (e.g. Gaussian with 0 mean and unit variance). For instance, many elements used in the objective function of a learning algorithm (such as the RBF kernel of Support Vector Machines or the L1 and L2 regularizers of linear models) assume that all features are centered around 0 and have variance in the same order. If a feature has a variance that is orders of magnitude larger than others, it might dominate the objective function and make the estimator unable to learn from other features correctly as expected. This scaler can also be applied to sparse CSR or CSC matrices by passing with_mean=False to avoid breaking the sparsity structure of the data. Read more in the User Guide. Parameters
copybool, default=True
If False, try to avoid a copy and do inplace scaling instead. This is not guaranteed to always work inplace; e.g. if the data is not a NumPy array or scipy.sparse CSR matrix, a copy may still be returned.
with_meanbool, default=True
If True, center the data before scaling. This does not work (and will raise an exception) when attempted on sparse matrices, because centering them entails building a dense matrix which in common use cases is likely to be too large to fit in memory.
with_stdbool, default=True
If True, scale the data to unit variance (or equivalently, unit standard deviation). Attributes
scale_ndarray of shape (n_features,) or None
Per feature relative scaling of the data to achieve zero mean and unit variance. Generally this is calculated using np.sqrt(var_). If a variance is zero, we can’t achieve unit variance, and the data is left as-is, giving a scaling factor of 1. scale_ is equal to None when with_std=False. New in version 0.17: scale_
mean_ndarray of shape (n_features,) or None
The mean value for each feature in the training set. Equal to None when with_mean=False.
var_ndarray of shape (n_features,) or None
The variance for each feature in the training set. Used to compute scale_. Equal to None when with_std=False.
n_samples_seen_int or ndarray of shape (n_features,)
The number of samples processed by the estimator for each feature. If there are no missing samples, the n_samples_seen will be an integer, otherwise it will be an array of dtype int. If sample_weights are used it will be a float (if no missing data) or an array of dtype float that sums the weights seen so far. Will be reset on new calls to fit, but increments across partial_fit calls. See also
scale
Equivalent function without the estimator API.
PCA
Further removes the linear correlation across features with ‘whiten=True’. Notes NaNs are treated as missing values: disregarded in fit, and maintained in transform. We use a biased estimator for the standard deviation, equivalent to numpy.std(x, ddof=0). Note that the choice of ddof is unlikely to affect model performance. For a comparison of the different scalers, transformers, and normalizers, see examples/preprocessing/plot_all_scaling.py. Examples >>> from sklearn.preprocessing import StandardScaler
>>> data = [[0, 0], [0, 0], [1, 1], [1, 1]]
>>> scaler = StandardScaler()
>>> print(scaler.fit(data))
StandardScaler()
>>> print(scaler.mean_)
[0.5 0.5]
>>> print(scaler.transform(data))
[[-1. -1.]
[-1. -1.]
[ 1. 1.]
[ 1. 1.]]
>>> print(scaler.transform([[2, 2]]))
[[3. 3.]]
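The example output above can be reproduced with the formula z = (x - u) / s, using the biased standard deviation (ddof=0) mentioned in the Notes. A minimal NumPy sketch:

```python
import numpy as np

data = np.array([[0., 0.], [0., 0.], [1., 1.], [1., 1.]])

u = data.mean(axis=0)   # per-feature mean (mean_): [0.5 0.5]
s = data.std(axis=0)    # biased std, ddof=0 (scale_): [0.5 0.5]

z = (data - u) / s                     # standardized training data
new = (np.array([[2., 2.]]) - u) / s   # transform a new sample
```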
Methods
fit(X[, y, sample_weight]) Compute the mean and std to be used for later scaling.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
inverse_transform(X[, copy]) Scale back the data to the original representation.
partial_fit(X[, y, sample_weight]) Online computation of mean and std on X for later scaling.
set_params(**params) Set the parameters of this estimator.
transform(X[, copy]) Perform standardization by centering and scaling.
fit(X, y=None, sample_weight=None) [source]
Compute the mean and std to be used for later scaling. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The data used to compute the mean and standard deviation used for later scaling along the features axis.
yNone
Ignored.
sample_weightarray-like of shape (n_samples,), default=None
Individual weights for each sample. New in version 0.24: parameter sample_weight support to StandardScaler. Returns
selfobject
Fitted scaler.
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray array of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
inverse_transform(X, copy=None) [source]
Scale back the data to the original representation Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The data used to scale along the features axis.
copybool, default=None
Copy the input X or not. Returns
X_tr{ndarray, sparse matrix} of shape (n_samples, n_features)
Transformed array.
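inverse_transform undoes transform exactly: multiply by scale_ and add mean_ back. A short round-trip sketch with illustrative data:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1., 2.], [3., 4.], [5., 6.]])
scaler = StandardScaler().fit(X)

# transform then inverse_transform recovers the original data
roundtrip = scaler.inverse_transform(scaler.transform(X))
```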
partial_fit(X, y=None, sample_weight=None) [source]
Online computation of mean and std on X for later scaling. All of X is processed as a single batch. This is intended for cases when fit is not feasible due to a very large number of samples or because X is read from a continuous stream. The algorithm for incremental mean and std is given in Equation 1.5a,b of Chan, Tony F., Gene H. Golub, and Randall J. LeVeque. “Algorithms for computing the sample variance: Analysis and recommendations.” The American Statistician 37.3 (1983): 242-247. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The data used to compute the mean and standard deviation used for later scaling along the features axis.
yNone
Ignored.
sample_weightarray-like of shape (n_samples,), default=None
Individual weights for each sample. New in version 0.24: parameter sample_weight support to StandardScaler. Returns
selfobject
Fitted scaler.
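Because the incremental algorithm of Chan et al. is exact, feeding the data to partial_fit in batches yields the same statistics as a single call to fit. A sketch with illustrative data:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.arange(20, dtype=float).reshape(10, 2)

# stream the data in three chunks via partial_fit
batch_scaler = StandardScaler()
for batch in np.array_split(X, 3):
    batch_scaler.partial_fit(batch)

# fit on the full data in one shot for comparison
full_scaler = StandardScaler().fit(X)
```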
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
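The <component>__<parameter> convention mentioned above lets set_params reach inside a Pipeline; make_pipeline names each step after its lowercased class name. A minimal sketch:

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# step names default to lowercased class names:
# "standardscaler" and "logisticregression"
pipe = make_pipeline(StandardScaler(), LogisticRegression())

# update a nested parameter of the logistic regression step
pipe.set_params(logisticregression__C=10.0)
C = pipe.get_params()["logisticregression__C"]
```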
transform(X, copy=None) [source]
Perform standardization by centering and scaling. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The data used to scale along the features axis.
copybool, default=None
Copy the input X or not. Returns
X_tr{ndarray, sparse matrix} of shape (n_samples, n_features)
Transformed array.
Examples using sklearn.preprocessing.StandardScaler
Release Highlights for scikit-learn 0.23
Release Highlights for scikit-learn 0.22
Classifier comparison
Demo of DBSCAN clustering algorithm
Comparing different hierarchical linkage methods on toy datasets
A demo of K-Means clustering on the handwritten digits data
Comparing different clustering algorithms on toy datasets
Principal Component Regression vs Partial Least Squares Regression
Factor Analysis (with rotation) to visualize patterns
Combine predictors using stacking
Prediction Latency
MNIST classification using multinomial logistic + L1
L1 Penalty and Sparsity in Logistic Regression
Poisson regression and non-normal loss
Tweedie regression on insurance claims
Common pitfalls in interpretation of coefficients of linear models
Visualizations with Display Objects
Advanced Plotting With Partial Dependence
Detection error tradeoff (DET) curve
Comparing Nearest Neighbors with and without Neighborhood Components Analysis
Dimensionality Reduction with Neighborhood Components Analysis
Varying regularization in Multi-layer Perceptron
Column Transformer with Mixed Types
Importance of Feature Scaling
Feature discretization
Compare the effect of different scalers on data with outliers
SVM-Anova: SVM with univariate feature selection
RBF SVM parameters
fit(X, y=None, sample_weight=None) [source]
Compute the mean and std to be used for later scaling. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The data used to compute the mean and standard deviation used for later scaling along the features axis.
yNone
Ignored.
sample_weightarray-like of shape (n_samples,), default=None
Individual weights for each sample. New in version 0.24: parameter sample_weight support to StandardScaler. Returns
selfobject
Fitted scaler.
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray array of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
class sklearn.random_projection.GaussianRandomProjection(n_components='auto', *, eps=0.1, random_state=None) [source]
Reduce dimensionality through Gaussian random projection. The components of the random matrix are drawn from N(0, 1 / n_components). Read more in the User Guide. New in version 0.13. Parameters
n_componentsint or ‘auto’, default=’auto’
Dimensionality of the target projection space. n_components can be automatically adjusted according to the number of samples in the dataset and the bound given by the Johnson-Lindenstrauss lemma. In that case the quality of the embedding is controlled by the eps parameter. It should be noted that the Johnson-Lindenstrauss lemma can yield very conservative estimates of the required number of components as it makes no assumption on the structure of the dataset.
epsfloat, default=0.1
Parameter to control the quality of the embedding according to the Johnson-Lindenstrauss lemma when n_components is set to ‘auto’. The value should be strictly positive. Smaller values lead to better embedding and higher number of dimensions (n_components) in the target projection space.
random_stateint, RandomState instance or None, default=None
Controls the pseudo random number generator used to generate the projection matrix at fit time. Pass an int for reproducible output across multiple function calls. See Glossary. Attributes
n_components_int
Concrete number of components computed when n_components=”auto”.
components_ndarray of shape (n_components, n_features)
Random matrix used for the projection. See also
SparseRandomProjection
Examples >>> import numpy as np
>>> from sklearn.random_projection import GaussianRandomProjection
>>> rng = np.random.RandomState(42)
>>> X = rng.rand(100, 10000)
>>> transformer = GaussianRandomProjection(random_state=rng)
>>> X_new = transformer.fit_transform(X)
>>> X_new.shape
(100, 3947)
Methods
fit(X[, y]) Generate a sparse random projection matrix.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
set_params(**params) Set the parameters of this estimator.
transform(X) Project the data by using matrix product with the random matrix.
fit(X, y=None) [source]
Generate a sparse random projection matrix. Parameters
X{ndarray, sparse matrix} of shape (n_samples, n_features)
Training set: only the shape is used to find optimal random matrix dimensions based on the theory referenced in the aforementioned papers. y
Ignored. Returns
self
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray array of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X) [source]
Project the data by using matrix product with the random matrix. Parameters
X{ndarray, sparse matrix} of shape (n_samples, n_features)
The input data to project into a smaller dimensional space. Returns
X_new{ndarray, sparse matrix} of shape (n_samples, n_components)
Projected array.
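As a sketch, transform really is a plain matrix product with components_ (the dimensions below are illustrative):

```python
import numpy as np
from sklearn.random_projection import GaussianRandomProjection

rng = np.random.RandomState(0)
X = rng.rand(50, 1000)

transformer = GaussianRandomProjection(n_components=20, random_state=0)
X_new = transformer.fit_transform(X)  # shape (50, 20)

# Equivalent to projecting with the fitted random matrix by hand.
print(np.allclose(X_new, X @ transformer.components_.T))  # True
```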
sklearn.random_projection.johnson_lindenstrauss_min_dim(n_samples, *, eps=0.1) [source]
Find a ‘safe’ number of components to randomly project to. The distortion introduced by a random projection p only changes the distance between two points by a factor (1 +- eps) in a Euclidean space with good probability. The projection p is an eps-embedding as defined by: (1 - eps) ||u - v||^2 < ||p(u) - p(v)||^2 < (1 + eps) ||u - v||^2 Where u and v are any rows taken from a dataset of shape (n_samples, n_features), eps is in ]0, 1[ and p is a projection by a random Gaussian N(0, 1) matrix of shape (n_components, n_features) (or a sparse Achlioptas matrix). The minimum number of components to guarantee the eps-embedding is given by: n_components >= 4 log(n_samples) / (eps^2 / 2 - eps^3 / 3) Note that the number of dimensions is independent of the original number of features but instead depends on the size of the dataset: the larger the dataset, the higher the minimal dimensionality of an eps-embedding. Read more in the User Guide. Parameters
n_samplesint or array-like of int
Number of samples that should be an integer greater than 0. If an array is given, it will compute a safe number of components array-wise.
epsfloat or ndarray of shape (n_components,), dtype=float, default=0.1
Maximum distortion rate in the range (0, 1) as defined by the Johnson-Lindenstrauss lemma. If an array is given, it will compute a safe number of components array-wise. Returns
n_componentsint or ndarray of int
The minimal number of components to guarantee with good probability an eps-embedding with n_samples. References
1
https://en.wikipedia.org/wiki/Johnson%E2%80%93Lindenstrauss_lemma
2
Sanjoy Dasgupta and Anupam Gupta, 1999, “An elementary proof of the Johnson-Lindenstrauss Lemma.” http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.45.3654 Examples >>> from sklearn.random_projection import johnson_lindenstrauss_min_dim
>>> johnson_lindenstrauss_min_dim(1e6, eps=0.5)
663
>>> johnson_lindenstrauss_min_dim(1e6, eps=[0.5, 0.1, 0.01])
array([ 663, 11841, 1112658])
>>> johnson_lindenstrauss_min_dim([1e4, 1e5, 1e6], eps=0.1)
array([ 7894, 9868, 11841])
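The returned value can be checked against the bound 4 log(n_samples) / (eps^2 / 2 - eps^3 / 3) directly (the function truncates the ratio rather than rounding up, consistent with the examples above):

```python
import numpy as np
from sklearn.random_projection import johnson_lindenstrauss_min_dim

n_samples, eps = 1e6, 0.5
bound = 4 * np.log(n_samples) / (eps ** 2 / 2 - eps ** 3 / 3)

print(int(bound))                                         # 663
print(johnson_lindenstrauss_min_dim(n_samples, eps=eps))  # 663
```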
class sklearn.random_projection.SparseRandomProjection(n_components='auto', *, density='auto', eps=0.1, dense_output=False, random_state=None) [source]
Reduce dimensionality through sparse random projection. Sparse random matrix is an alternative to dense random projection matrix that guarantees similar embedding quality while being much more memory efficient and allowing faster computation of the projected data. If we note s = 1 / density, the components of the random matrix are drawn from:
-sqrt(s) / sqrt(n_components) with probability 1 / 2s
0 with probability 1 - 1 / s
+sqrt(s) / sqrt(n_components) with probability 1 / 2s
Read more in the User Guide. New in version 0.13. Parameters
n_componentsint or ‘auto’, default=’auto’
Dimensionality of the target projection space. n_components can be automatically adjusted according to the number of samples in the dataset and the bound given by the Johnson-Lindenstrauss lemma. In that case the quality of the embedding is controlled by the eps parameter. It should be noted that the Johnson-Lindenstrauss lemma can yield very conservative estimates of the required number of components as it makes no assumption on the structure of the dataset.
densityfloat or ‘auto’, default=’auto’
Ratio in the range (0, 1] of non-zero component in the random projection matrix. If density = ‘auto’, the value is set to the minimum density as recommended by Ping Li et al.: 1 / sqrt(n_features). Use density = 1 / 3.0 if you want to reproduce the results from Achlioptas, 2001.
epsfloat, default=0.1
Parameter to control the quality of the embedding according to the Johnson-Lindenstrauss lemma when n_components is set to ‘auto’. This value should be strictly positive. Smaller values lead to better embedding and higher number of dimensions (n_components) in the target projection space.
dense_outputbool, default=False
If True, ensure that the output of the random projection is a dense numpy array even if the input and random projection matrix are both sparse. In practice, if the number of components is small the number of zero components in the projected data will be very small and it will be more CPU and memory efficient to use a dense representation. If False, the projected data uses a sparse representation if the input is sparse.
random_stateint, RandomState instance or None, default=None
Controls the pseudo random number generator used to generate the projection matrix at fit time. Pass an int for reproducible output across multiple function calls. See Glossary. Attributes
n_components_int
Concrete number of components computed when n_components=”auto”.
components_sparse matrix of shape (n_components, n_features)
Random matrix used for the projection. Sparse matrix will be of CSR format.
density_float in range 0.0 - 1.0
Concrete density computed when density = “auto”. See also
GaussianRandomProjection
References
1
Ping Li, T. Hastie and K. W. Church, 2006, “Very Sparse Random Projections”. https://web.stanford.edu/~hastie/Papers/Ping/KDD06_rp.pdf
2
D. Achlioptas, 2001, “Database-friendly random projections”, https://users.soe.ucsc.edu/~optas/papers/jl.pdf Examples >>> import numpy as np
>>> from sklearn.random_projection import SparseRandomProjection
>>> rng = np.random.RandomState(42)
>>> X = rng.rand(100, 10000)
>>> transformer = SparseRandomProjection(random_state=rng)
>>> X_new = transformer.fit_transform(X)
>>> X_new.shape
(100, 3947)
>>> # very few components are non-zero
>>> np.mean(transformer.components_ != 0)
0.0100...
Methods
fit(X[, y]) Generate a sparse random projection matrix.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
set_params(**params) Set the parameters of this estimator.
transform(X) Project the data by using matrix product with the random matrix.
fit(X, y=None) [source]
Generate a sparse random projection matrix. Parameters
X{ndarray, sparse matrix} of shape (n_samples, n_features)
Training set: only the shape is used to find optimal random matrix dimensions based on the theory referenced in the aforementioned papers. y
Ignored. Returns
self
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray array of shape (n_samples, n_features_new)
Transformed array.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
transform(X) [source]
Project the data by using matrix product with the random matrix. Parameters
X{ndarray, sparse matrix} of shape (n_samples, n_features)
The input data to project into a smaller dimensional space. Returns
X_new{ndarray, sparse matrix} of shape (n_samples, n_components)
Projected array.
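A sketch verifying the three-valued distribution above: every stored nonzero entry of components_ equals +/- sqrt(s) / sqrt(n_components) with s = 1 / density_.

```python
import numpy as np
from sklearn.random_projection import SparseRandomProjection

rng = np.random.RandomState(0)
X = rng.rand(100, 10000)

transformer = SparseRandomProjection(random_state=0).fit(X)

s = 1.0 / transformer.density_
expected = np.sqrt(s) / np.sqrt(transformer.n_components_)

# components_ is CSR, so .data holds exactly the nonzero entries.
nonzeros = transformer.components_.data
print(np.allclose(np.abs(nonzeros), expected))  # True
```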
Examples using sklearn.random_projection.SparseRandomProjection
Manifold learning on handwritten digits: Locally Linear Embedding, Isomap…
The Johnson-Lindenstrauss bound for embedding with random projections
fit(X, y=None) [source]
Generate a sparse random projection matrix. Parameters
X{ndarray, sparse matrix} of shape (n_samples, n_features)
Training set: only the shape is used to find optimal random matrix dimensions based on the theory referenced in the afore mentioned papers. y
Ignored Returns
self | sklearn.modules.generated.sklearn.random_projection.sparserandomprojection#sklearn.random_projection.SparseRandomProjection.fit |
fit_transform(X, y=None, **fit_params) [source]
Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters
Xarray-like of shape (n_samples, n_features)
Input samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
Target values (None for unsupervised transformations).
**fit_paramsdict
Additional fit parameters. Returns
X_newndarray array of shape (n_samples, n_features_new)
Transformed array. | sklearn.modules.generated.sklearn.random_projection.sparserandomprojection#sklearn.random_projection.SparseRandomProjection.fit_transform |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.random_projection.sparserandomprojection#sklearn.random_projection.SparseRandomProjection.get_params |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.random_projection.sparserandomprojection#sklearn.random_projection.SparseRandomProjection.set_params |
transform(X) [source]
Project the data by using matrix product with the random matrix Parameters
X{ndarray, sparse matrix} of shape (n_samples, n_features)
The input data to project into a smaller dimensional space. Returns
X_new{ndarray, sparse matrix} of shape (n_samples, n_components)
Projected array. | sklearn.modules.generated.sklearn.random_projection.sparserandomprojection#sklearn.random_projection.SparseRandomProjection.transform |
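The fit/transform pair above can be sketched end to end. The input sizes and the fixed n_components below are illustrative choices, not values from the source:

```python
import numpy as np
from sklearn.random_projection import SparseRandomProjection

rng = np.random.RandomState(0)
X = rng.rand(100, 10000)  # 100 samples in a 10000-dimensional space

# Fix n_components explicitly to keep the example small; with the default
# 'auto' it would be chosen from the Johnson-Lindenstrauss bound, using
# only the shape of X as described in fit.
transformer = SparseRandomProjection(n_components=50, random_state=0)
X_new = transformer.fit_transform(X)
print(X_new.shape)  # (100, 50)
```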
class sklearn.semi_supervised.LabelPropagation(kernel='rbf', *, gamma=20, n_neighbors=7, max_iter=1000, tol=0.001, n_jobs=None) [source]
Label Propagation classifier Read more in the User Guide. Parameters
kernel{‘knn’, ‘rbf’} or callable, default=’rbf’
String identifier for kernel function to use or the kernel function itself. Only ‘rbf’ and ‘knn’ strings are valid inputs. The function passed should take two inputs, each of shape (n_samples, n_features), and return a (n_samples, n_samples) shaped weight matrix.
gammafloat, default=20
Parameter for rbf kernel.
n_neighborsint, default=7
Parameter for knn kernel which needs to be strictly positive.
max_iterint, default=1000
Maximum number of iterations allowed.
tolfloat, default=1e-3
Convergence tolerance: threshold to consider the system at steady state.
n_jobsint, default=None
The number of parallel jobs to run. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. Attributes
X_ndarray of shape (n_samples, n_features)
Input array.
classes_ndarray of shape (n_classes,)
The distinct labels used in classifying instances.
label_distributions_ndarray of shape (n_samples, n_classes)
Categorical distribution for each item.
transduction_ndarray of shape (n_samples,)
Label assigned to each item via the transduction.
n_iter_int
Number of iterations run. See also
LabelSpreading
Alternate label propagation strategy more robust to noise. References Xiaojin Zhu and Zoubin Ghahramani. Learning from labeled and unlabeled data with label propagation. Technical Report CMU-CALD-02-107, Carnegie Mellon University, 2002 http://pages.cs.wisc.edu/~jerryzhu/pub/CMU-CALD-02-107.pdf Examples >>> import numpy as np
>>> from sklearn import datasets
>>> from sklearn.semi_supervised import LabelPropagation
>>> label_prop_model = LabelPropagation()
>>> iris = datasets.load_iris()
>>> rng = np.random.RandomState(42)
>>> random_unlabeled_points = rng.rand(len(iris.target)) < 0.3
>>> labels = np.copy(iris.target)
>>> labels[random_unlabeled_points] = -1
>>> label_prop_model.fit(iris.data, labels)
LabelPropagation(...)
Methods
fit(X, y) Fit a semi-supervised label propagation model.
get_params([deep]) Get parameters for this estimator.
predict(X) Performs inductive inference across the model.
predict_proba(X) Predict probability for each possible outcome.
score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of this estimator.
fit(X, y) [source]
Fit a semi-supervised label propagation model. All the input data is provided as matrix X (labeled and unlabeled) together with a corresponding label vector y that uses a dedicated marker value for unlabeled samples. Parameters
Xarray-like of shape (n_samples, n_features)
A matrix of shape (n_samples, n_samples) will be created from this.
yarray-like of shape (n_samples,)
Label vector (unlabeled points are marked as -1). All unlabeled samples will be transductively assigned labels. Returns
selfobject
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Performs inductive inference across the model. Parameters
Xarray-like of shape (n_samples, n_features)
The data matrix. Returns
yndarray of shape (n_samples,)
Predictions for input data.
predict_proba(X) [source]
Predict probability for each possible outcome. Compute the probability estimates for each single sample in X and each possible outcome seen during training (categorical distribution). Parameters
Xarray-like of shape (n_samples, n_features)
The data matrix. Returns
probabilitiesndarray of shape (n_samples, n_classes)
Normalized probability distributions across class labels.
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.semi_supervised.labelpropagation#sklearn.semi_supervised.LabelPropagation |
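The iris example from the class docstring can be extended to show the transductive and inductive outputs described above; the ~30% masking rate mirrors that example, and the printed shapes follow from iris having 150 samples and 3 classes:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.semi_supervised import LabelPropagation

iris = load_iris()
rng = np.random.RandomState(42)
labels = np.copy(iris.target)
labels[rng.rand(len(labels)) < 0.3] = -1  # -1 marks unlabeled samples

model = LabelPropagation().fit(iris.data, labels)

# transduction_ holds a label for every training sample, including
# those that were passed in as -1.
print(model.transduction_.shape)   # (150,)

# predict_proba performs inductive inference on new points and returns
# a normalized distribution over the classes seen during fit.
proba = model.predict_proba(iris.data[:5])
print(proba.shape)                 # (5, 3)
```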
sklearn.semi_supervised.LabelPropagation
class sklearn.semi_supervised.LabelPropagation(kernel='rbf', *, gamma=20, n_neighbors=7, max_iter=1000, tol=0.001, n_jobs=None) [source]
Label Propagation classifier Read more in the User Guide. Parameters
kernel{‘knn’, ‘rbf’} or callable, default=’rbf’
String identifier for kernel function to use or the kernel function itself. Only ‘rbf’ and ‘knn’ strings are valid inputs. The function passed should take two inputs, each of shape (n_samples, n_features), and return a (n_samples, n_samples) shaped weight matrix.
gammafloat, default=20
Parameter for rbf kernel.
n_neighborsint, default=7
Parameter for knn kernel which needs to be strictly positive.
max_iterint, default=1000
Maximum number of iterations allowed.
tolfloat, default=1e-3
Convergence tolerance: threshold to consider the system at steady state.
n_jobsint, default=None
The number of parallel jobs to run. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. Attributes
X_ndarray of shape (n_samples, n_features)
Input array.
classes_ndarray of shape (n_classes,)
The distinct labels used in classifying instances.
label_distributions_ndarray of shape (n_samples, n_classes)
Categorical distribution for each item.
transduction_ndarray of shape (n_samples,)
Label assigned to each item via the transduction.
n_iter_int
Number of iterations run. See also
LabelSpreading
Alternate label propagation strategy more robust to noise. References Xiaojin Zhu and Zoubin Ghahramani. Learning from labeled and unlabeled data with label propagation. Technical Report CMU-CALD-02-107, Carnegie Mellon University, 2002 http://pages.cs.wisc.edu/~jerryzhu/pub/CMU-CALD-02-107.pdf Examples >>> import numpy as np
>>> from sklearn import datasets
>>> from sklearn.semi_supervised import LabelPropagation
>>> label_prop_model = LabelPropagation()
>>> iris = datasets.load_iris()
>>> rng = np.random.RandomState(42)
>>> random_unlabeled_points = rng.rand(len(iris.target)) < 0.3
>>> labels = np.copy(iris.target)
>>> labels[random_unlabeled_points] = -1
>>> label_prop_model.fit(iris.data, labels)
LabelPropagation(...)
Methods
fit(X, y) Fit a semi-supervised label propagation model.
get_params([deep]) Get parameters for this estimator.
predict(X) Performs inductive inference across the model.
predict_proba(X) Predict probability for each possible outcome.
score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of this estimator.
fit(X, y) [source]
Fit a semi-supervised label propagation model. All the input data is provided as matrix X (labeled and unlabeled) together with a corresponding label vector y that uses a dedicated marker value for unlabeled samples. Parameters
Xarray-like of shape (n_samples, n_features)
A matrix of shape (n_samples, n_samples) will be created from this.
yarray-like of shape (n_samples,)
Label vector (unlabeled points are marked as -1). All unlabeled samples will be transductively assigned labels. Returns
selfobject
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Performs inductive inference across the model. Parameters
Xarray-like of shape (n_samples, n_features)
The data matrix. Returns
yndarray of shape (n_samples,)
Predictions for input data.
predict_proba(X) [source]
Predict probability for each possible outcome. Compute the probability estimates for each single sample in X and each possible outcome seen during training (categorical distribution). Parameters
Xarray-like of shape (n_samples, n_features)
The data matrix. Returns
probabilitiesndarray of shape (n_samples, n_classes)
Normalized probability distributions across class labels.
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.semi_supervised.labelpropagation |
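The kernel parameter also accepts a callable with the contract described above: two arrays of shape (n_samples, n_features) in, a weight matrix out. A minimal sketch, where `my_kernel` and the fixed `gamma=10` are illustrative assumptions, not part of the API:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.semi_supervised import LabelPropagation

def my_kernel(X, Y):
    # Takes two (n_samples, n_features) arrays and returns an
    # (n_samples_X, n_samples_Y) weight matrix, as the docs require.
    return rbf_kernel(X, Y, gamma=10)

X = np.array([[0.0], [0.1], [5.0], [5.1]])
y = np.array([0, -1, 1, -1])  # -1 marks unlabeled samples

model = LabelPropagation(kernel=my_kernel).fit(X, y)
print(model.transduction_)  # the -1 points inherit their cluster's label
```

With gamma=10 the two clusters are effectively disconnected, so each unlabeled point is transductively assigned the label of its nearby labeled neighbor.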
fit(X, y) [source]
Fit a semi-supervised label propagation model based All the input data is provided matrix X (labeled and unlabeled) and corresponding label matrix y with a dedicated marker value for unlabeled samples. Parameters
Xarray-like of shape (n_samples, n_features)
A matrix of shape (n_samples, n_samples) will be created from this.
yarray-like of shape (n_samples,)
n_labeled_samples (unlabeled points are marked as -1) All unlabeled samples will be transductively assigned labels. Returns
selfobject | sklearn.modules.generated.sklearn.semi_supervised.labelpropagation#sklearn.semi_supervised.LabelPropagation.fit |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.semi_supervised.labelpropagation#sklearn.semi_supervised.LabelPropagation.get_params |
predict(X) [source]
Performs inductive inference across the model. Parameters
Xarray-like of shape (n_samples, n_features)
The data matrix. Returns
yndarray of shape (n_samples,)
Predictions for input data. | sklearn.modules.generated.sklearn.semi_supervised.labelpropagation#sklearn.semi_supervised.LabelPropagation.predict |
predict_proba(X) [source]
Predict probability for each possible outcome. Compute the probability estimates for each single sample in X and each possible outcome seen during training (categorical distribution). Parameters
Xarray-like of shape (n_samples, n_features)
The data matrix. Returns
probabilitiesndarray of shape (n_samples, n_classes)
Normalized probability distributions across class labels. | sklearn.modules.generated.sklearn.semi_supervised.labelpropagation#sklearn.semi_supervised.LabelPropagation.predict_proba |
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y. | sklearn.modules.generated.sklearn.semi_supervised.labelpropagation#sklearn.semi_supervised.LabelPropagation.score |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.semi_supervised.labelpropagation#sklearn.semi_supervised.LabelPropagation.set_params |
class sklearn.semi_supervised.LabelSpreading(kernel='rbf', *, gamma=20, n_neighbors=7, alpha=0.2, max_iter=30, tol=0.001, n_jobs=None) [source]
LabelSpreading model for semi-supervised learning. This model is similar to the basic Label Propagation algorithm, but uses an affinity matrix based on the normalized graph Laplacian and soft clamping across the labels. Read more in the User Guide. Parameters
kernel{‘knn’, ‘rbf’} or callable, default=’rbf’
String identifier for kernel function to use or the kernel function itself. Only ‘rbf’ and ‘knn’ strings are valid inputs. The function passed should take two inputs, each of shape (n_samples, n_features), and return a (n_samples, n_samples) shaped weight matrix.
gammafloat, default=20
Parameter for rbf kernel.
n_neighborsint, default=7
Parameter for knn kernel which is a strictly positive integer.
alphafloat, default=0.2
Clamping factor. A value in (0, 1) that specifies the relative amount that an instance should adopt the information from its neighbors as opposed to its initial label. alpha=0 means keeping the initial label information; alpha=1 means replacing all initial information.
max_iterint, default=30
Maximum number of iterations allowed.
tolfloat, default=1e-3
Convergence tolerance: threshold to consider the system at steady state.
n_jobsint, default=None
The number of parallel jobs to run. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. Attributes
X_ndarray of shape (n_samples, n_features)
Input array.
classes_ndarray of shape (n_classes,)
The distinct labels used in classifying instances.
label_distributions_ndarray of shape (n_samples, n_classes)
Categorical distribution for each item.
transduction_ndarray of shape (n_samples,)
Label assigned to each item via the transduction.
n_iter_int
Number of iterations run. See also
LabelPropagation
Unregularized graph based semi-supervised learning. References Dengyong Zhou, Olivier Bousquet, Thomas Navin Lal, Jason Weston, Bernhard Schoelkopf. Learning with local and global consistency (2004) http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.115.3219 Examples >>> import numpy as np
>>> from sklearn import datasets
>>> from sklearn.semi_supervised import LabelSpreading
>>> label_prop_model = LabelSpreading()
>>> iris = datasets.load_iris()
>>> rng = np.random.RandomState(42)
>>> random_unlabeled_points = rng.rand(len(iris.target)) < 0.3
>>> labels = np.copy(iris.target)
>>> labels[random_unlabeled_points] = -1
>>> label_prop_model.fit(iris.data, labels)
LabelSpreading(...)
Methods
fit(X, y) Fit a semi-supervised label propagation model.
get_params([deep]) Get parameters for this estimator.
predict(X) Performs inductive inference across the model.
predict_proba(X) Predict probability for each possible outcome.
score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of this estimator.
fit(X, y) [source]
Fit a semi-supervised label propagation model. All the input data is provided as matrix X (labeled and unlabeled) together with a corresponding label vector y that uses a dedicated marker value for unlabeled samples. Parameters
Xarray-like of shape (n_samples, n_features)
A matrix of shape (n_samples, n_samples) will be created from this.
yarray-like of shape (n_samples,)
Label vector (unlabeled points are marked as -1). All unlabeled samples will be transductively assigned labels. Returns
selfobject
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Performs inductive inference across the model. Parameters
Xarray-like of shape (n_samples, n_features)
The data matrix. Returns
yndarray of shape (n_samples,)
Predictions for input data.
predict_proba(X) [source]
Predict probability for each possible outcome. Compute the probability estimates for each single sample in X and each possible outcome seen during training (categorical distribution). Parameters
Xarray-like of shape (n_samples, n_features)
The data matrix. Returns
probabilitiesndarray of shape (n_samples, n_classes)
Normalized probability distributions across class labels.
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.semi_supervised.labelspreading#sklearn.semi_supervised.LabelSpreading |
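The soft clamping controlled by alpha can be sketched on a tiny dataset; the data points and gamma below are illustrative choices, not from the source:

```python
import numpy as np
from sklearn.semi_supervised import LabelSpreading

X = np.array([[0.0], [0.2], [3.0], [3.2]])
y = np.array([0, -1, 1, -1])  # -1 marks unlabeled samples

# alpha in (0, 1) sets how much each point adopts information from its
# neighbors versus keeping its initial label; the default 0.2 keeps
# labeled points close to their given labels (soft clamping).
model = LabelSpreading(kernel='rbf', gamma=10, alpha=0.2).fit(X, y)
print(model.transduction_)  # each unlabeled point joins its nearby cluster
```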
sklearn.semi_supervised.LabelSpreading
class sklearn.semi_supervised.LabelSpreading(kernel='rbf', *, gamma=20, n_neighbors=7, alpha=0.2, max_iter=30, tol=0.001, n_jobs=None) [source]
LabelSpreading model for semi-supervised learning. This model is similar to the basic Label Propagation algorithm, but uses an affinity matrix based on the normalized graph Laplacian and soft clamping across the labels. Read more in the User Guide. Parameters
kernel{‘knn’, ‘rbf’} or callable, default=’rbf’
String identifier for kernel function to use or the kernel function itself. Only ‘rbf’ and ‘knn’ strings are valid inputs. The function passed should take two inputs, each of shape (n_samples, n_features), and return a (n_samples, n_samples) shaped weight matrix.
gammafloat, default=20
Parameter for rbf kernel.
n_neighborsint, default=7
Parameter for knn kernel which is a strictly positive integer.
alphafloat, default=0.2
Clamping factor. A value in (0, 1) that specifies the relative amount that an instance should adopt the information from its neighbors as opposed to its initial label. alpha=0 means keeping the initial label information; alpha=1 means replacing all initial information.
max_iterint, default=30
Maximum number of iterations allowed.
tolfloat, default=1e-3
Convergence tolerance: threshold to consider the system at steady state.
n_jobsint, default=None
The number of parallel jobs to run. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. Attributes
X_ndarray of shape (n_samples, n_features)
Input array.
classes_ndarray of shape (n_classes,)
The distinct labels used in classifying instances.
label_distributions_ndarray of shape (n_samples, n_classes)
Categorical distribution for each item.
transduction_ndarray of shape (n_samples,)
Label assigned to each item via the transduction.
n_iter_int
Number of iterations run. See also
LabelPropagation
Unregularized graph based semi-supervised learning. References Dengyong Zhou, Olivier Bousquet, Thomas Navin Lal, Jason Weston, Bernhard Schoelkopf. Learning with local and global consistency (2004) http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.115.3219 Examples >>> import numpy as np
>>> from sklearn import datasets
>>> from sklearn.semi_supervised import LabelSpreading
>>> label_prop_model = LabelSpreading()
>>> iris = datasets.load_iris()
>>> rng = np.random.RandomState(42)
>>> random_unlabeled_points = rng.rand(len(iris.target)) < 0.3
>>> labels = np.copy(iris.target)
>>> labels[random_unlabeled_points] = -1
>>> label_prop_model.fit(iris.data, labels)
LabelSpreading(...)
Methods
fit(X, y) Fit a semi-supervised label propagation model.
get_params([deep]) Get parameters for this estimator.
predict(X) Performs inductive inference across the model.
predict_proba(X) Predict probability for each possible outcome.
score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of this estimator.
fit(X, y) [source]
Fit a semi-supervised label propagation model. All the input data is provided as matrix X (labeled and unlabeled) together with a corresponding label vector y that uses a dedicated marker value for unlabeled samples. Parameters
Xarray-like of shape (n_samples, n_features)
A matrix of shape (n_samples, n_samples) will be created from this.
yarray-like of shape (n_samples,)
Label vector (unlabeled points are marked as -1). All unlabeled samples will be transductively assigned labels. Returns
selfobject
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Performs inductive inference across the model. Parameters
Xarray-like of shape (n_samples, n_features)
The data matrix. Returns
yndarray of shape (n_samples,)
Predictions for input data.
predict_proba(X) [source]
Predict probability for each possible outcome. Compute the probability estimates for each single sample in X and each possible outcome seen during training (categorical distribution). Parameters
Xarray-like of shape (n_samples, n_features)
The data matrix. Returns
probabilitiesndarray of shape (n_samples, n_classes)
Normalized probability distributions across class labels.
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
Examples using sklearn.semi_supervised.LabelSpreading
Label Propagation learning a complex structure
Label Propagation digits: Demonstrating performance
Decision boundary of semi-supervised classifiers versus SVM on the Iris dataset
Semi-supervised Classification on a Text Dataset
Label Propagation digits active learning | sklearn.modules.generated.sklearn.semi_supervised.labelspreading |
fit(X, y) [source]
Fit a semi-supervised label propagation model. All the input data is provided as matrix X (labeled and unlabeled) together with a corresponding label vector y that uses a dedicated marker value for unlabeled samples. Parameters
Xarray-like of shape (n_samples, n_features)
A matrix of shape (n_samples, n_samples) will be created from this.
yarray-like of shape (n_samples,)
Label vector (unlabeled points are marked as -1). All unlabeled samples will be transductively assigned labels. Returns
selfobject | sklearn.modules.generated.sklearn.semi_supervised.labelspreading#sklearn.semi_supervised.LabelSpreading.fit |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.semi_supervised.labelspreading#sklearn.semi_supervised.LabelSpreading.get_params |
predict(X) [source]
Performs inductive inference across the model. Parameters
Xarray-like of shape (n_samples, n_features)
The data matrix. Returns
yndarray of shape (n_samples,)
Predictions for input data. | sklearn.modules.generated.sklearn.semi_supervised.labelspreading#sklearn.semi_supervised.LabelSpreading.predict |
predict_proba(X) [source]
Predict probability for each possible outcome. Compute the probability estimates for each single sample in X and each possible outcome seen during training (categorical distribution). Parameters
Xarray-like of shape (n_samples, n_features)
The data matrix. Returns
probabilitiesndarray of shape (n_samples, n_classes)
Normalized probability distributions across class labels. | sklearn.modules.generated.sklearn.semi_supervised.labelspreading#sklearn.semi_supervised.LabelSpreading.predict_proba |
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y. | sklearn.modules.generated.sklearn.semi_supervised.labelspreading#sklearn.semi_supervised.LabelSpreading.score |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.semi_supervised.labelspreading#sklearn.semi_supervised.LabelSpreading.set_params |
class sklearn.semi_supervised.SelfTrainingClassifier(base_estimator, threshold=0.75, criterion='threshold', k_best=10, max_iter=10, verbose=False) [source]
Self-training classifier. This class allows a given supervised classifier to function as a semi-supervised classifier, allowing it to learn from unlabeled data. It does this by iteratively predicting pseudo-labels for the unlabeled data and adding them to the training set. The classifier will continue iterating until either max_iter is reached, or no pseudo-labels were added to the training set in the previous iteration. Read more in the User Guide. Parameters
base_estimatorestimator object
An estimator object implementing fit and predict_proba. Invoking the fit method will fit a clone of the passed estimator, which will be stored in the base_estimator_ attribute.
criterion{‘threshold’, ‘k_best’}, default=’threshold’
The selection criterion used to select which labels to add to the training set. If ‘threshold’, pseudo-labels with prediction probabilities above threshold are added to the dataset. If ‘k_best’, the k_best pseudo-labels with highest prediction probabilities are added to the dataset. When using the ‘threshold’ criterion, a well calibrated classifier should be used.
thresholdfloat, default=0.75
The decision threshold for use with criterion='threshold'. Should be in [0, 1). When using the ‘threshold’ criterion, a well calibrated classifier should be used.
k_bestint, default=10
The number of samples to add in each iteration. Only used when criterion='k_best'.
max_iterint or None, default=10
Maximum number of iterations allowed. Should be greater than or equal to 0. If it is None, the classifier will continue to predict labels until no new pseudo-labels are added, or all unlabeled samples have been labeled.
verbosebool, default=False
Enable verbose output. Attributes
base_estimator_estimator object
The fitted estimator.
classes_ndarray or list of ndarray of shape (n_classes,)
Class labels for each output. (Taken from the trained base_estimator_).
transduction_ndarray of shape (n_samples,)
The labels used for the final fit of the classifier, including pseudo-labels added during fit.
labeled_iter_ndarray of shape (n_samples,)
The iteration in which each sample was labeled. When a sample has iteration 0, the sample was already labeled in the original dataset. When a sample has iteration -1, the sample was not labeled in any iteration.
n_iter_int
The number of rounds of self-training, that is the number of times the base estimator is fitted on relabeled variants of the training set.
termination_condition_{‘max_iter’, ‘no_change’, ‘all_labeled’}
The reason that fitting was stopped. ‘max_iter’: n_iter_ reached max_iter. ‘no_change’: no new labels were predicted. ‘all_labeled’: all unlabeled samples were labeled before max_iter was reached. References David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proceedings of the 33rd annual meeting on Association for Computational Linguistics (ACL ‘95). Association for Computational Linguistics, Stroudsburg, PA, USA, 189-196. DOI: https://doi.org/10.3115/981658.981684 Examples >>> import numpy as np
>>> from sklearn import datasets
>>> from sklearn.semi_supervised import SelfTrainingClassifier
>>> from sklearn.svm import SVC
>>> rng = np.random.RandomState(42)
>>> iris = datasets.load_iris()
>>> random_unlabeled_points = rng.rand(iris.target.shape[0]) < 0.3
>>> iris.target[random_unlabeled_points] = -1
>>> svc = SVC(probability=True, gamma="auto")
>>> self_training_model = SelfTrainingClassifier(svc)
>>> self_training_model.fit(iris.data, iris.target)
SelfTrainingClassifier(...)
Methods
decision_function(X) Calls decision function of the base_estimator.
fit(X, y) Fits this SelfTrainingClassifier to a dataset.
get_params([deep]) Get parameters for this estimator.
predict(X) Predict the classes of X.
predict_log_proba(X) Predict log probability for each possible outcome.
predict_proba(X) Predict probability for each possible outcome.
score(X, y) Calls score on the base_estimator.
set_params(**params) Set the parameters of this estimator.
decision_function(X) [source]
Calls decision function of the base_estimator. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Array representing the data. Returns
yndarray of shape (n_samples,) or (n_samples, n_classes)
Result of the decision function of the base_estimator.
fit(X, y) [source]
Fits this SelfTrainingClassifier to a dataset. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Array representing the data.
y{array-like, sparse matrix} of shape (n_samples,)
Array representing the labels. Unlabeled samples should have the label -1. Returns
selfobject
Returns an instance of self.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Predict the classes of X. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Array representing the data. Returns
yndarray of shape (n_samples,)
Array with predicted labels.
predict_log_proba(X) [source]
Predict log probability for each possible outcome. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Array representing the data. Returns
yndarray of shape (n_samples, n_features)
Array with log prediction probabilities.
predict_proba(X) [source]
Predict probability for each possible outcome. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Array representing the data. Returns
yndarray of shape (n_samples, n_features)
Array with prediction probabilities.
score(X, y) [source]
Calls score on the base_estimator. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Array representing the data.
yarray-like of shape (n_samples,)
Array representing the labels. Returns
scorefloat
Result of calling score on the base_estimator.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.semi_supervised.selftrainingclassifier#sklearn.semi_supervised.SelfTrainingClassifier |
sklearn.semi_supervised.SelfTrainingClassifier
class sklearn.semi_supervised.SelfTrainingClassifier(base_estimator, threshold=0.75, criterion='threshold', k_best=10, max_iter=10, verbose=False) [source]
Self-training classifier. This class allows a given supervised classifier to function as a semi-supervised classifier, allowing it to learn from unlabeled data. It does this by iteratively predicting pseudo-labels for the unlabeled data and adding them to the training set. The classifier will continue iterating until either max_iter is reached, or no pseudo-labels were added to the training set in the previous iteration. Read more in the User Guide. Parameters
base_estimatorestimator object
An estimator object implementing fit and predict_proba. Invoking the fit method will fit a clone of the passed estimator, which will be stored in the base_estimator_ attribute.
criterion{‘threshold’, ‘k_best’}, default=’threshold’
The selection criterion used to select which labels to add to the training set. If ‘threshold’, pseudo-labels with prediction probabilities above threshold are added to the dataset. If ‘k_best’, the k_best pseudo-labels with highest prediction probabilities are added to the dataset. When using the ‘threshold’ criterion, a well calibrated classifier should be used.
thresholdfloat, default=0.75
The decision threshold for use with criterion='threshold'. Should be in [0, 1). When using the ‘threshold’ criterion, a well calibrated classifier should be used.
k_bestint, default=10
The number of samples to add in each iteration. Only used when criterion='k_best'.
max_iterint or None, default=10
Maximum number of iterations allowed. Should be greater than or equal to 0. If it is None, the classifier will continue to predict labels until no new pseudo-labels are added, or all unlabeled samples have been labeled.
verbosebool, default=False
Enable verbose output. Attributes
base_estimator_estimator object
The fitted estimator.
classes_ndarray or list of ndarray of shape (n_classes,)
Class labels for each output. (Taken from the trained base_estimator_).
transduction_ndarray of shape (n_samples,)
The labels used for the final fit of the classifier, including pseudo-labels added during fit.
labeled_iter_ndarray of shape (n_samples,)
The iteration in which each sample was labeled. When a sample has iteration 0, the sample was already labeled in the original dataset. When a sample has iteration -1, the sample was not labeled in any iteration.
n_iter_int
The number of rounds of self-training, that is the number of times the base estimator is fitted on relabeled variants of the training set.
termination_condition_{‘max_iter’, ‘no_change’, ‘all_labeled’}
The reason that fitting was stopped. ‘max_iter’: n_iter_ reached max_iter. ‘no_change’: no new labels were predicted. ‘all_labeled’: all unlabeled samples were labeled before max_iter was reached. References David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proceedings of the 33rd annual meeting on Association for Computational Linguistics (ACL ‘95). Association for Computational Linguistics, Stroudsburg, PA, USA, 189-196. DOI: https://doi.org/10.3115/981658.981684 Examples >>> import numpy as np
>>> from sklearn import datasets
>>> from sklearn.semi_supervised import SelfTrainingClassifier
>>> from sklearn.svm import SVC
>>> rng = np.random.RandomState(42)
>>> iris = datasets.load_iris()
>>> random_unlabeled_points = rng.rand(iris.target.shape[0]) < 0.3
>>> iris.target[random_unlabeled_points] = -1
>>> svc = SVC(probability=True, gamma="auto")
>>> self_training_model = SelfTrainingClassifier(svc)
>>> self_training_model.fit(iris.data, iris.target)
SelfTrainingClassifier(...)
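Continuing the example above (restated as a self-contained sketch), the labeled_iter_ and termination_condition_ attributes described earlier show how the self-training loop behaved:

```python
import numpy as np
from sklearn import datasets
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

rng = np.random.RandomState(42)
iris = datasets.load_iris()
y = iris.target.copy()
y[rng.rand(y.shape[0]) < 0.3] = -1  # mark roughly 30% of samples as unlabeled

model = SelfTrainingClassifier(SVC(probability=True, gamma="auto"))
model.fit(iris.data, y)

# labeled_iter_ is 0 for originally labeled samples, -1 for samples that
# were never pseudo-labeled, and k for samples labeled in iteration k.
print(model.termination_condition_)
print(np.bincount(model.labeled_iter_[model.labeled_iter_ >= 0]))
```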
Methods
decision_function(X) Calls the decision function of the base_estimator.
fit(X, y) Fits this SelfTrainingClassifier to a dataset.
get_params([deep]) Get parameters for this estimator.
predict(X) Predict the classes of X.
predict_log_proba(X) Predict log probability for each possible outcome.
predict_proba(X) Predict probability for each possible outcome.
score(X, y) Calls score on the base_estimator.
set_params(**params) Set the parameters of this estimator.
decision_function(X) [source]
Calls the decision function of the base_estimator. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Array representing the data. Returns
yndarray of shape (n_samples, n_features)
Result of the decision function of the base_estimator.
fit(X, y) [source]
Fits this SelfTrainingClassifier to a dataset. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Array representing the data.
y{array-like, sparse matrix} of shape (n_samples,)
Array representing the labels. Unlabeled samples should have the label -1. Returns
selfobject
Returns an instance of self.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Predict the classes of X. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Array representing the data. Returns
yndarray of shape (n_samples,)
Array with predicted labels.
predict_log_proba(X) [source]
Predict log probability for each possible outcome. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Array representing the data. Returns
yndarray of shape (n_samples, n_classes)
Array with log prediction probabilities.
predict_proba(X) [source]
Predict probability for each possible outcome. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Array representing the data. Returns
yndarray of shape (n_samples, n_classes)
Array with prediction probabilities.
score(X, y) [source]
Calls score on the base_estimator. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Array representing the data.
yarray-like of shape (n_samples,)
Array representing the labels. Returns
scorefloat
Result of calling score on the base_estimator.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
Examples using sklearn.semi_supervised.SelfTrainingClassifier
Release Highlights for scikit-learn 0.24
Decision boundary of semi-supervised classifiers versus SVM on the Iris dataset
Effect of varying threshold for self-training
Semi-supervised Classification on a Text Dataset | sklearn.modules.generated.sklearn.semi_supervised.selftrainingclassifier |
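A hedged sketch (not part of the reference above) of the 'k_best' selection criterion described under the criterion parameter: instead of a probability threshold, the k most confident pseudo-labels are added in each round.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X, y = load_iris(return_X_y=True)
y = y.copy()
y[rng.rand(y.shape[0]) < 0.5] = -1  # unlabel about half of the samples

model = SelfTrainingClassifier(
    SVC(probability=True, gamma="auto"),
    criterion="k_best",
    k_best=10,      # add the 10 most confident pseudo-labels each round
    max_iter=None,  # keep going until no unlabeled samples remain
)
model.fit(X, y)
print(model.n_iter_, model.termination_condition_)
```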
sklearn.set_config(assume_finite=None, working_memory=None, print_changed_only=None, display=None) [source]
Set global scikit-learn configuration. New in version 0.19. Parameters
assume_finitebool, default=None
If True, validation for finiteness will be skipped, saving time, but leading to potential crashes. If False, validation for finiteness will be performed, avoiding errors. Global default: False. New in version 0.19.
working_memoryint, default=None
If set, scikit-learn will attempt to limit the size of temporary arrays to this number of MiB (per job when parallelised), often saving both computation time and memory on expensive operations that can be performed in chunks. Global default: 1024. New in version 0.20.
print_changed_onlybool, default=None
If True, only the parameters that were set to non-default values will be printed when printing an estimator. For example, with this option set to True, print(SVC()) will print only ‘SVC()’, whereas the default behaviour is to print ‘SVC(C=1.0, cache_size=200, …)’ with all the unchanged parameters. New in version 0.21.
display{‘text’, ‘diagram’}, default=None
If ‘diagram’, estimators will be displayed as a diagram in a Jupyter lab or notebook context. If ‘text’, estimators will be displayed as text. Default is ‘text’. New in version 0.23. See also
config_context
Context manager for global scikit-learn configuration.
get_config
Retrieve current values of the global configuration. | sklearn.modules.generated.sklearn.set_config#sklearn.set_config |
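A minimal sketch of the round trip, also using get_config and config_context from the See also entries:

```python
import sklearn

sklearn.set_config(assume_finite=True, working_memory=512)
cfg = sklearn.get_config()
print(cfg["assume_finite"], cfg["working_memory"])  # True 512

# config_context applies the same settings temporarily and restores the
# previous values on exit.
with sklearn.config_context(assume_finite=False):
    print(sklearn.get_config()["assume_finite"])  # False
print(sklearn.get_config()["assume_finite"])  # True

# Restore the documented global defaults.
sklearn.set_config(assume_finite=False, working_memory=1024)
```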
sklearn.show_versions() [source]
Print useful debugging information. New in version 0.20. | sklearn.modules.generated.sklearn.show_versions#sklearn.show_versions
sklearn.base.clone
sklearn.base.clone(estimator, *, safe=True) [source]
Constructs a new unfitted estimator with the same parameters. Clone does a deep copy of the model in an estimator without actually copying attached data. It yields a new estimator with the same parameters that has not been fitted on any data. If the estimator’s random_state parameter is an integer (or if the estimator doesn’t have a random_state parameter), an exact clone is returned: the clone and the original estimator will give the exact same results. Otherwise, a statistical clone is returned: the clone might yield different results from the original estimator. More details can be found in Controlling randomness. Parameters
estimator{list, tuple, set} of estimator instance or a single estimator instance
The estimator or group of estimators to be cloned.
safebool, default=True
If safe is False, clone will fall back to a deep copy on objects that are not estimators. | sklearn.modules.generated.sklearn.base.clone |
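A short sketch of the behaviour described above: the clone keeps the hyper-parameters but drops the fitted state.

```python
import numpy as np
from sklearn.base import clone
from sklearn.svm import SVC

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])

original = SVC(C=2.0).fit(X, y)
copy = clone(original)

print(copy.C)                         # 2.0 -- parameters are preserved
print(hasattr(copy, "support_"))      # False -- fitted attributes are not
print(hasattr(original, "support_"))  # True -- the original stays fitted
```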
sklearn.base.is_classifier
sklearn.base.is_classifier(estimator) [source]
Return True if the given estimator is (probably) a classifier. Parameters
estimatorobject
Estimator object to test. Returns
outbool
True if estimator is a classifier and False otherwise. | sklearn.modules.generated.sklearn.base.is_classifier |
sklearn.base.is_regressor
sklearn.base.is_regressor(estimator) [source]
Return True if the given estimator is (probably) a regressor. Parameters
estimatorestimator instance
Estimator object to test. Returns
outbool
True if estimator is a regressor and False otherwise. | sklearn.modules.generated.sklearn.base.is_regressor |
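A sketch exercising both helpers (is_classifier above and is_regressor here) on stock estimators:

```python
from sklearn.base import is_classifier, is_regressor
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVC

print(is_classifier(SVC()), is_regressor(SVC()))  # True False
print(is_classifier(LinearRegression()),
      is_regressor(LinearRegression()))  # False True
```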
sklearn.calibration.calibration_curve
sklearn.calibration.calibration_curve(y_true, y_prob, *, normalize=False, n_bins=5, strategy='uniform') [source]
Compute true and predicted probabilities for a calibration curve. The method assumes the inputs come from a binary classifier, and discretizes the [0, 1] interval into bins. Calibration curves may also be referred to as reliability diagrams. Read more in the User Guide. Parameters
y_truearray-like of shape (n_samples,)
True targets.
y_probarray-like of shape (n_samples,)
Probabilities of the positive class.
normalizebool, default=False
Whether y_prob needs to be normalized into the [0, 1] interval, i.e. is not a proper probability. If True, the smallest value in y_prob is linearly mapped onto 0 and the largest one onto 1.
n_binsint, default=5
Number of bins to discretize the [0, 1] interval. A bigger number requires more data. Bins with no samples (i.e. without corresponding values in y_prob) will not be returned, so the returned arrays may have fewer than n_bins values.
strategy{‘uniform’, ‘quantile’}, default=’uniform’
Strategy used to define the widths of the bins. uniform
The bins have identical widths. quantile
The bins have the same number of samples and depend on y_prob. Returns
prob_truendarray of shape (n_bins,) or smaller
The proportion of samples whose class is the positive class, in each bin (fraction of positives).
prob_predndarray of shape (n_bins,) or smaller
The mean predicted probability in each bin. References Alexandru Niculescu-Mizil and Rich Caruana (2005) Predicting Good Probabilities With Supervised Learning, in Proceedings of the 22nd International Conference on Machine Learning (ICML). See section 4 (Qualitative Analysis of Predictions). Examples >>> import numpy as np
>>> from sklearn.calibration import calibration_curve
>>> y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1])
>>> y_pred = np.array([0.1, 0.2, 0.3, 0.4, 0.65, 0.7, 0.8, 0.9, 1.])
>>> prob_true, prob_pred = calibration_curve(y_true, y_pred, n_bins=3)
>>> prob_true
array([0. , 0.5, 1. ])
>>> prob_pred
array([0.2 , 0.525, 0.85 ])
Examples using sklearn.calibration.calibration_curve
Comparison of Calibration of Classifiers
Probability Calibration curves | sklearn.modules.generated.sklearn.calibration.calibration_curve |
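A sketch (reusing the data from the example above) of the strategy='quantile' option described under the strategy parameter:

```python
import numpy as np
from sklearn.calibration import calibration_curve

y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1])
y_pred = np.array([0.1, 0.2, 0.3, 0.4, 0.65, 0.7, 0.8, 0.9, 1.0])

# With strategy='quantile' the three bins each contain three of the nine
# samples, regardless of how the probabilities are spread over [0, 1].
prob_true, prob_pred = calibration_curve(
    y_true, y_pred, n_bins=3, strategy="quantile"
)
print(prob_true.shape, prob_pred.shape)  # (3,) (3,)
```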
sklearn.cluster.affinity_propagation
sklearn.cluster.affinity_propagation(S, *, preference=None, convergence_iter=15, max_iter=200, damping=0.5, copy=True, verbose=False, return_n_iter=False, random_state='warn') [source]
Perform Affinity Propagation Clustering of data. Read more in the User Guide. Parameters
Sarray-like of shape (n_samples, n_samples)
Matrix of similarities between points.
preferencearray-like of shape (n_samples,) or float, default=None
Preferences for each point - points with larger values of preferences are more likely to be chosen as exemplars. The number of exemplars, i.e. of clusters, is influenced by the input preferences value. If the preferences are not passed as arguments, they will be set to the median of the input similarities (resulting in a moderate number of clusters). For a smaller amount of clusters, this can be set to the minimum value of the similarities.
convergence_iterint, default=15
Number of iterations with no change in the number of estimated clusters that stops the convergence.
max_iterint, default=200
Maximum number of iterations.
dampingfloat, default=0.5
Damping factor between 0.5 and 1.
copybool, default=True
If copy is False, the affinity matrix is modified inplace by the algorithm, for memory efficiency.
verbosebool, default=False
The verbosity level.
return_n_iterbool, default=False
Whether or not to return the number of iterations.
random_stateint, RandomState instance or None, default=0
Pseudo-random number generator to control the starting state. Use an int for reproducible results across function calls. See the Glossary. New in version 0.23: this parameter was previously hardcoded as 0. Returns
cluster_centers_indicesndarray of shape (n_clusters,)
Index of clusters centers.
labelsndarray of shape (n_samples,)
Cluster labels for each point.
n_iterint
Number of iterations run. Returned only if return_n_iter is set to True. Notes For an example, see examples/cluster/plot_affinity_propagation.py. When the algorithm does not converge, it returns an empty array as cluster_center_indices and -1 as label for each training sample. When all training samples have equal similarities and equal preferences, the assignment of cluster centers and labels depends on the preference. If the preference is smaller than the similarities, a single cluster center and label 0 for every sample will be returned. Otherwise, every training sample becomes its own cluster center and is assigned a unique label. References Brendan J. Frey and Delbert Dueck, “Clustering by Passing Messages Between Data Points”, Science Feb. 2007
Examples using sklearn.cluster.affinity_propagation
Visualizing the stock market structure | sklearn.modules.generated.sklearn.cluster.affinity_propagation |
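A sketch of the function form, which expects a precomputed similarity matrix S. Negative squared Euclidean distance is one common choice of similarity (and is what the AffinityPropagation estimator computes internally); the exact number of clusters found is not asserted here, since it depends on the preference.

```python
import numpy as np
from sklearn.cluster import affinity_propagation
from sklearn.metrics.pairwise import euclidean_distances

rng = np.random.RandomState(0)
X = np.concatenate([rng.randn(10, 2), rng.randn(10, 2) + 5.0])

# Similarity = negative squared Euclidean distance between points.
S = -euclidean_distances(X, squared=True)

centers_idx, labels = affinity_propagation(S, random_state=0)
print(len(centers_idx), labels.shape)
```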