sklearn.preprocessing.maxabs_scale
sklearn.preprocessing.maxabs_scale(X, *, axis=0, copy=True) [source]
Scale each feature to the [-1, 1] range without breaking the sparsity. This estimator scales each feature individually such that the maximal absolute value of each feature in the training set will be 1.0. This scaler can also be applied to sparse CSR or CSC matrices. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The data.
axisint, default=0
Axis used to scale along. If 0, independently scale each feature, otherwise (if 1) scale each sample.
copybool, default=True
Set to False to perform inplace scaling and avoid a copy (if the input is already a numpy array). Returns
X_tr{ndarray, sparse matrix} of shape (n_samples, n_features)
The transformed data. Warning Risk of data leak Do not use maxabs_scale unless you know what you are doing. A common mistake is to apply it to the entire data before splitting into training and test sets. This will bias the model evaluation because information would have leaked from the test set to the training set. In general, we recommend using MaxAbsScaler within a Pipeline in order to prevent most risks of data leaking: pipe = make_pipeline(MaxAbsScaler(), LogisticRegression()). See also
MaxAbsScaler
Performs scaling to the [-1, 1] range using the Transformer API (e.g. as part of a preprocessing Pipeline). Notes NaNs are treated as missing values: disregarded to compute the statistics, and maintained during the data transformation. For a comparison of the different scalers, transformers, and normalizers, see examples/preprocessing/plot_all_scaling.py. | sklearn.modules.generated.sklearn.preprocessing.maxabs_scale |
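The entry above ships no inline example; a minimal sketch of maxabs_scale on a small dense array (values chosen purely for illustration):

```python
import numpy as np
from sklearn.preprocessing import maxabs_scale

# Each column is divided by its maximum absolute value, so every
# feature ends up in [-1, 1] and zeros are preserved (sparsity-safe).
X = np.array([[1., -1.,  2.],
              [2.,  0.,  0.],
              [0.,  1., -1.]])
X_tr = maxabs_scale(X)
# Column maxima of |X| are [2, 1, 2], so e.g. the first column
# becomes [0.5, 1.0, 0.0].
```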
sklearn.preprocessing.minmax_scale
sklearn.preprocessing.minmax_scale(X, feature_range=(0, 1), *, axis=0, copy=True) [source]
Transform features by scaling each feature to a given range. This estimator scales and translates each feature individually such that it is in the given range on the training set, i.e. between zero and one. The transformation is given by (when axis=0): X_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
X_scaled = X_std * (max - min) + min
where min, max = feature_range. The transformation is calculated as (when axis=0): X_scaled = scale * X + min - X.min(axis=0) * scale
where scale = (max - min) / (X.max(axis=0) - X.min(axis=0))
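The two formulations above are equivalent; a quick numeric check of the first formula against minmax_scale itself (illustrative values):

```python
import numpy as np
from sklearn.preprocessing import minmax_scale

X = np.array([[1., 10.],
              [2., 20.],
              [3., 30.]])

# Direct application of the documented formula with feature_range=(0, 1):
X_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# minmax_scale produces the same result.
X_tr = minmax_scale(X)
```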
This transformation is often used as an alternative to zero mean, unit variance scaling. Read more in the User Guide. New in version 0.17: minmax_scale function interface to MinMaxScaler. Parameters
Xarray-like of shape (n_samples, n_features)
The data.
feature_rangetuple (min, max), default=(0, 1)
Desired range of transformed data.
axisint, default=0
Axis used to scale along. If 0, independently scale each feature, otherwise (if 1) scale each sample.
copybool, default=True
Set to False to perform inplace scaling and avoid a copy (if the input is already a numpy array). Returns
X_trndarray of shape (n_samples, n_features)
The transformed data. Warning Risk of data leak Do not use minmax_scale unless you know what you are doing. A common mistake is to apply it to the entire data before splitting into training and test sets. This will bias the model evaluation because information would have leaked from the test set to the training set. In general, we recommend using MinMaxScaler within a Pipeline in order to prevent most risks of data leaking: pipe = make_pipeline(MinMaxScaler(), LogisticRegression()). See also
MinMaxScaler
Performs scaling to a given range using the Transformer API (e.g. as part of a preprocessing Pipeline). Notes For a comparison of the different scalers, transformers, and normalizers, see examples/preprocessing/plot_all_scaling.py.
Examples using sklearn.preprocessing.minmax_scale
Compare the effect of different scalers on data with outliers | sklearn.modules.generated.sklearn.preprocessing.minmax_scale |
sklearn.preprocessing.normalize
sklearn.preprocessing.normalize(X, norm='l2', *, axis=1, copy=True, return_norm=False) [source]
Scale input vectors individually to unit norm (vector length). Read more in the User Guide. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The data to normalize, element by element. scipy.sparse matrices should be in CSR format to avoid an unnecessary copy.
norm{‘l1’, ‘l2’, ‘max’}, default=’l2’
The norm to use to normalize each non-zero sample (or each non-zero feature if axis is 0).
axis{0, 1}, default=1
Axis used to normalize the data along. If 1, independently normalize each sample, otherwise (if 0) normalize each feature.
copybool, default=True
Set to False to perform inplace row normalization and avoid a copy (if the input is already a numpy array or a scipy.sparse CSR matrix and if axis is 1).
return_normbool, default=False
Whether to return the computed norms. Returns
X{ndarray, sparse matrix} of shape (n_samples, n_features)
Normalized input X.
normsndarray of shape (n_samples, ) if axis=1 else (n_features, )
An array of norms along given axis for X. When X is sparse, a NotImplementedError will be raised for norm ‘l1’ or ‘l2’. See also
Normalizer
Performs normalization using the Transformer API (e.g. as part of a preprocessing Pipeline). Notes For a comparison of the different scalers, transformers, and normalizers, see examples/preprocessing/plot_all_scaling.py. | sklearn.modules.generated.sklearn.preprocessing.normalize |
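The normalize entry above has no inline example; a minimal sketch showing the 'l2' default, the 'l1' variant, and return_norm (values are illustrative):

```python
import numpy as np
from sklearn.preprocessing import normalize

X = np.array([[3., 4.],
              [1., 0.]])

# Default: each row is divided by its L2 norm, giving unit-length rows.
X_l2 = normalize(X)                  # first row -> [0.6, 0.8]

# 'l1': absolute values of each row sum to 1.
X_l1 = normalize(X, norm='l1')

# return_norm=True also returns the per-row norms used for scaling.
X_l2b, norms = normalize(X, return_norm=True)
```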
sklearn.preprocessing.power_transform
sklearn.preprocessing.power_transform(X, method='yeo-johnson', *, standardize=True, copy=True) [source]
Power transforms are a family of parametric, monotonic transformations that are applied to make data more Gaussian-like. This is useful for modeling issues related to heteroscedasticity (non-constant variance), or other situations where normality is desired. Currently, power_transform supports the Box-Cox transform and the Yeo-Johnson transform. The optimal parameter for stabilizing variance and minimizing skewness is estimated through maximum likelihood. Box-Cox requires input data to be strictly positive, while Yeo-Johnson supports both positive or negative data. By default, zero-mean, unit-variance normalization is applied to the transformed data. Read more in the User Guide. Parameters
Xarray-like of shape (n_samples, n_features)
The data to be transformed using a power transformation.
method{‘yeo-johnson’, ‘box-cox’}, default=’yeo-johnson’
The power transform method. Available methods are: ‘yeo-johnson’ [1], works with positive and negative values ‘box-cox’ [2], only works with strictly positive values Changed in version 0.23: The default value of the method parameter changed from ‘box-cox’ to ‘yeo-johnson’ in 0.23.
standardizebool, default=True
Set to True to apply zero-mean, unit-variance normalization to the transformed output.
copybool, default=True
Set to False to perform inplace computation during transformation. Returns
X_transndarray of shape (n_samples, n_features)
The transformed data. See also
PowerTransformer
Equivalent transformation with the Transformer API (e.g. as part of a preprocessing Pipeline).
quantile_transform
Maps data to a standard normal distribution with the parameter output_distribution='normal'. Notes NaNs are treated as missing values: disregarded in fit, and maintained in transform. For a comparison of the different scalers, transformers, and normalizers, see examples/preprocessing/plot_all_scaling.py. References
1
I.K. Yeo and R.A. Johnson, “A new family of power transformations to improve normality or symmetry.” Biometrika, 87(4), pp.954-959, (2000).
2
G.E.P. Box and D.R. Cox, “An Analysis of Transformations”, Journal of the Royal Statistical Society B, 26, 211-252 (1964). Examples >>> import numpy as np
>>> from sklearn.preprocessing import power_transform
>>> data = [[1, 2], [3, 2], [4, 5]]
>>> print(power_transform(data, method='box-cox'))
[[-1.332... -0.707...]
[ 0.256... -0.707...]
[ 1.076... 1.414...]]
Warning Risk of data leak. Do not use power_transform unless you know what you are doing. A common mistake is to apply it to the entire data before splitting into training and test sets. This will bias the model evaluation because information would have leaked from the test set to the training set. In general, we recommend using PowerTransformer within a Pipeline in order to prevent most risks of data leaking, e.g.: pipe = make_pipeline(PowerTransformer(), LogisticRegression()). | sklearn.modules.generated.sklearn.preprocessing.power_transform |
sklearn.preprocessing.quantile_transform
sklearn.preprocessing.quantile_transform(X, *, axis=0, n_quantiles=1000, output_distribution='uniform', ignore_implicit_zeros=False, subsample=100000, random_state=None, copy=True) [source]
Transform features using quantiles information. This method transforms the features to follow a uniform or a normal distribution. Therefore, for a given feature, this transformation tends to spread out the most frequent values. It also reduces the impact of (marginal) outliers: this is therefore a robust preprocessing scheme. The transformation is applied on each feature independently. First an estimate of the cumulative distribution function of a feature is used to map the original values to a uniform distribution. The obtained values are then mapped to the desired output distribution using the associated quantile function. Features values of new/unseen data that fall below or above the fitted range will be mapped to the bounds of the output distribution. Note that this transform is non-linear. It may distort linear correlations between variables measured at the same scale but renders variables measured at different scales more directly comparable. Read more in the User Guide. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The data to transform.
axisint, default=0
Axis used to compute the quantiles along. If 0, transform each feature, otherwise (if 1) transform each sample.
n_quantilesint, default=1000 or n_samples
Number of quantiles to be computed. It corresponds to the number of landmarks used to discretize the cumulative distribution function. If n_quantiles is larger than the number of samples, n_quantiles is set to the number of samples as a larger number of quantiles does not give a better approximation of the cumulative distribution function estimator.
output_distribution{‘uniform’, ‘normal’}, default=’uniform’
Marginal distribution for the transformed data. The choices are ‘uniform’ (default) or ‘normal’.
ignore_implicit_zerosbool, default=False
Only applies to sparse matrices. If True, the sparse entries of the matrix are discarded to compute the quantile statistics. If False, these entries are treated as zeros.
subsampleint, default=1e5
Maximum number of samples used to estimate the quantiles for computational efficiency. Note that the subsampling procedure may differ for value-identical sparse and dense matrices.
random_stateint, RandomState instance or None, default=None
Determines random number generation for subsampling and smoothing noise. Please see subsample for more details. Pass an int for reproducible results across multiple function calls. See Glossary
copybool, default=True
Set to False to perform inplace transformation and avoid a copy (if the input is already a numpy array). If True, a copy of X is transformed, leaving the original X unchanged. Changed in version 0.23: The default value of copy changed from False to True in 0.23. Returns
Xt{ndarray, sparse matrix} of shape (n_samples, n_features)
The transformed data. See also
QuantileTransformer
Performs quantile-based scaling using the Transformer API (e.g. as part of a preprocessing Pipeline).
power_transform
Maps data to a normal distribution using a power transformation.
scale
Performs standardization that is faster, but less robust to outliers.
robust_scale
Performs robust standardization that removes the influence of outliers but does not put outliers and inliers on the same scale. Notes NaNs are treated as missing values: disregarded in fit, and maintained in transform. Warning Risk of data leak Do not use quantile_transform unless you know what you are doing. A common mistake is to apply it to the entire data before splitting into training and test sets. This will bias the model evaluation because information would have leaked from the test set to the training set. In general, we recommend using QuantileTransformer within a Pipeline in order to prevent most risks of data leaking: pipe = make_pipeline(QuantileTransformer(), LogisticRegression()). For a comparison of the different scalers, transformers, and normalizers, see examples/preprocessing/plot_all_scaling.py. Examples >>> import numpy as np
>>> from sklearn.preprocessing import quantile_transform
>>> rng = np.random.RandomState(0)
>>> X = np.sort(rng.normal(loc=0.5, scale=0.25, size=(25, 1)), axis=0)
>>> quantile_transform(X, n_quantiles=10, random_state=0, copy=True)
array([...])
Examples using sklearn.preprocessing.quantile_transform
Effect of transforming the targets in regression model | sklearn.modules.generated.sklearn.preprocessing.quantile_transform |
sklearn.preprocessing.robust_scale
sklearn.preprocessing.robust_scale(X, *, axis=0, with_centering=True, with_scaling=True, quantile_range=(25.0, 75.0), copy=True, unit_variance=False) [source]
Standardize a dataset along any axis. Center to the median and component-wise scale according to the interquartile range. Read more in the User Guide. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The data to center and scale.
axisint, default=0
Axis used to compute the medians and IQR along. If 0, independently scale each feature, otherwise (if 1) scale each sample.
with_centeringbool, default=True
If True, center the data before scaling.
with_scalingbool, default=True
If True, scale the data to the interquartile range.
quantile_rangetuple (q_min, q_max), 0.0 < q_min < q_max < 100.0
default=(25.0, 75.0), == (1st quantile, 3rd quantile), == IQR Quantile range used to calculate scale_. New in version 0.18.
copybool, default=True
Set to False to perform inplace row normalization and avoid a copy (if the input is already a numpy array or a scipy.sparse CSR matrix and if axis is 1).
unit_variancebool, default=False
If True, scale data so that normally distributed features have a variance of 1. In general, if the difference between the x-values of q_max and q_min for a standard normal distribution is greater than 1, the dataset will be scaled down. If less than 1, the dataset will be scaled up. New in version 0.24. Returns
X_tr{ndarray, sparse matrix} of shape (n_samples, n_features)
The transformed data. See also
RobustScaler
Performs centering and scaling using the Transformer API (e.g. as part of a preprocessing Pipeline). Notes This implementation will refuse to center scipy.sparse matrices since it would make them non-sparse and would potentially crash the program with memory exhaustion problems. Instead the caller is expected to either set explicitly with_centering=False (in that case, only variance scaling will be performed on the features of the CSR matrix) or to call X.toarray() if he/she expects the materialized dense array to fit in memory. To avoid memory copy the caller should pass a CSR matrix. For a comparison of the different scalers, transformers, and normalizers, see examples/preprocessing/plot_all_scaling.py. Warning Risk of data leak Do not use robust_scale unless you know what you are doing. A common mistake is to apply it to the entire data before splitting into training and test sets. This will bias the model evaluation because information would have leaked from the test set to the training set. In general, we recommend using RobustScaler within a Pipeline in order to prevent most risks of data leaking: pipe = make_pipeline(RobustScaler(), LogisticRegression()). | sklearn.modules.generated.sklearn.preprocessing.robust_scale |
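The robust_scale entry above has no inline example; a minimal sketch showing why median/IQR centering resists outliers (values chosen for illustration):

```python
import numpy as np
from sklearn.preprocessing import robust_scale

# One gross outlier barely moves the median (3) or the IQR (4 - 2 = 2),
# so the bulk of the data keeps a comparable scale, unlike mean/std scaling.
X = np.array([[1.], [2.], [3.], [4.], [100.]])
X_tr = robust_scale(X)
# Each value becomes (x - median) / IQR = (x - 3) / 2.
```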
sklearn.preprocessing.scale
sklearn.preprocessing.scale(X, *, axis=0, with_mean=True, with_std=True, copy=True) [source]
Standardize a dataset along any axis. Center to the mean and component-wise scale to unit variance. Read more in the User Guide. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
The data to center and scale.
axisint, default=0
Axis used to compute the means and standard deviations along. If 0, independently standardize each feature, otherwise (if 1) standardize each sample.
with_meanbool, default=True
If True, center the data before scaling.
with_stdbool, default=True
If True, scale the data to unit variance (or equivalently, unit standard deviation).
copybool, default=True
Set to False to perform inplace row normalization and avoid a copy (if the input is already a numpy array or a scipy.sparse CSC matrix and if axis is 1). Returns
X_tr{ndarray, sparse matrix} of shape (n_samples, n_features)
The transformed data. See also
StandardScaler
Performs scaling to unit variance using the Transformer API (e.g. as part of a preprocessing Pipeline). Notes This implementation will refuse to center scipy.sparse matrices since it would make them non-sparse and would potentially crash the program with memory exhaustion problems. Instead the caller is expected to either set explicitly with_mean=False (in that case, only variance scaling will be performed on the features of the CSC matrix) or to call X.toarray() if he/she expects the materialized dense array to fit in memory. To avoid memory copy the caller should pass a CSC matrix. NaNs are treated as missing values: disregarded to compute the statistics, and maintained during the data transformation. We use a biased estimator for the standard deviation, equivalent to numpy.std(x, ddof=0). Note that the choice of ddof is unlikely to affect model performance. For a comparison of the different scalers, transformers, and normalizers, see examples/preprocessing/plot_all_scaling.py. Warning Risk of data leak Do not use scale unless you know what you are doing. A common mistake is to apply it to the entire data before splitting into training and test sets. This will bias the model evaluation because information would have leaked from the test set to the training set. In general, we recommend using StandardScaler within a Pipeline in order to prevent most risks of data leaking: pipe = make_pipeline(StandardScaler(), LogisticRegression()). | sklearn.modules.generated.sklearn.preprocessing.scale |
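The scale entry above has no inline example; a minimal sketch verifying the zero-mean, unit-variance property noted above (with the biased ddof=0 estimator; values are illustrative):

```python
import numpy as np
from sklearn.preprocessing import scale

X = np.array([[1., -1.],
              [2.,  0.],
              [3.,  1.]])
X_tr = scale(X)
# Each column now has mean 0 and (biased, ddof=0) standard deviation 1,
# matching numpy.std(x, ddof=0) as described in the Notes.
```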
sklearn.random_projection.johnson_lindenstrauss_min_dim
sklearn.random_projection.johnson_lindenstrauss_min_dim(n_samples, *, eps=0.1) [source]
Find a ‘safe’ number of components to randomly project to. The distortion introduced by a random projection p only changes the distance between two points by a factor (1 +- eps) in a Euclidean space with good probability. The projection p is an eps-embedding as defined by: (1 - eps) ||u - v||^2 < ||p(u) - p(v)||^2 < (1 + eps) ||u - v||^2 Where u and v are any rows taken from a dataset of shape (n_samples, n_features), eps is in ]0, 1[ and p is a projection by a random Gaussian N(0, 1) matrix of shape (n_components, n_features) (or a sparse Achlioptas matrix). The minimum number of components to guarantee the eps-embedding is given by: n_components >= 4 log(n_samples) / (eps^2 / 2 - eps^3 / 3) Note that the number of dimensions is independent of the original number of features but instead depends on the size of the dataset: the larger the dataset, the higher the minimal dimensionality of an eps-embedding. Read more in the User Guide. Parameters
n_samplesint or array-like of int
Number of samples that should be an integer greater than 0. If an array is given, it will compute a safe number of components array-wise.
epsfloat or ndarray of shape (n_components,), dtype=float, default=0.1
Maximum distortion rate in the range (0, 1) as defined by the Johnson-Lindenstrauss lemma. If an array is given, it will compute a safe number of components array-wise. Returns
n_componentsint or ndarray of int
The minimal number of components to guarantee with good probability an eps-embedding with n_samples. References
1
https://en.wikipedia.org/wiki/Johnson%E2%80%93Lindenstrauss_lemma
2
Sanjoy Dasgupta and Anupam Gupta, 1999, “An elementary proof of the Johnson-Lindenstrauss Lemma.” http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.45.3654 Examples >>> johnson_lindenstrauss_min_dim(1e6, eps=0.5)
663
>>> johnson_lindenstrauss_min_dim(1e6, eps=[0.5, 0.1, 0.01])
array([ 663, 11841, 1112658])
>>> johnson_lindenstrauss_min_dim([1e4, 1e5, 1e6], eps=0.1)
array([ 7894, 9868, 11841])
Examples using sklearn.random_projection.johnson_lindenstrauss_min_dim
The Johnson-Lindenstrauss bound for embedding with random projections | sklearn.modules.generated.sklearn.random_projection.johnson_lindenstrauss_min_dim |
sklearn.set_config
sklearn.set_config(assume_finite=None, working_memory=None, print_changed_only=None, display=None) [source]
Set global scikit-learn configuration. New in version 0.19. Parameters
assume_finitebool, default=None
If True, validation for finiteness will be skipped, saving time, but leading to potential crashes. If False, validation for finiteness will be performed, avoiding errors. Global default: False. New in version 0.19.
working_memoryint, default=None
If set, scikit-learn will attempt to limit the size of temporary arrays to this number of MiB (per job when parallelised), often saving both computation time and memory on expensive operations that can be performed in chunks. Global default: 1024. New in version 0.20.
print_changed_onlybool, default=None
If True, only the parameters that were set to non-default values will be printed when printing an estimator. For example, print(SVC()) while True will only print ‘SVC()’ while the default behaviour would be to print ‘SVC(C=1.0, cache_size=200, …)’ with all the non-changed parameters. New in version 0.21.
display{‘text’, ‘diagram’}, default=None
If ‘diagram’, estimators will be displayed as a diagram in a Jupyter lab or notebook context. If ‘text’, estimators will be displayed as text. Default is ‘text’. New in version 0.23. See also
config_context
Context manager for global scikit-learn configuration.
get_config
Retrieve current values of the global configuration.
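A minimal sketch of set_config paired with its get_config counterpart listed above (settings restored afterwards so the example has no side effects):

```python
import sklearn

# Skip finiteness validation globally (faster, but unsafe on dirty data).
sklearn.set_config(assume_finite=True)
assert sklearn.get_config()['assume_finite'] is True

# Restore the documented global default.
sklearn.set_config(assume_finite=False)
```

Passing None for a parameter (the default) leaves that setting unchanged, so unrelated options are never clobbered.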
Examples using sklearn.set_config
Release Highlights for scikit-learn 0.23
Compact estimator representations
Column Transformer with Mixed Types | sklearn.modules.generated.sklearn.set_config |
sklearn.show_versions
sklearn.show_versions() [source]
Print useful debugging information. New in version 0.20. | sklearn.modules.generated.sklearn.show_versions |
sklearn.svm.l1_min_c
sklearn.svm.l1_min_c(X, y, *, loss='squared_hinge', fit_intercept=True, intercept_scaling=1.0) [source]
Return the lowest bound for C such that for C in (l1_min_C, infinity) the model is guaranteed not to be empty. This applies to l1 penalized classifiers, such as LinearSVC with penalty=’l1’ and linear_model.LogisticRegression with penalty=’l1’. This value is valid if class_weight parameter in fit() is not set. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target vector relative to X.
loss{‘squared_hinge’, ‘log’}, default=’squared_hinge’
Specifies the loss function. With ‘squared_hinge’ it is the squared hinge loss (a.k.a. L2 loss). With ‘log’ it is the loss of logistic regression models.
fit_interceptbool, default=True
Specifies if the intercept should be fitted by the model. It must match the fit() method parameter.
intercept_scalingfloat, default=1.0
When fit_intercept is True, instance vector x becomes [x, intercept_scaling], i.e. a “synthetic” feature with constant value equal to intercept_scaling is appended to the instance vector. It must match the fit() method parameter. Returns
l1_min_cfloat
Minimum value for C.
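A minimal sketch of the bound in action: just below l1_min_c, an L1-penalized model fits no coefficients at all (binary iris subset chosen for illustration):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.svm import l1_min_c

X, y = load_iris(return_X_y=True)
X, y = X[y < 2], y[y < 2]              # binary problem for simplicity

c_min = l1_min_c(X, y, loss='log')

# For C < c_min the L1 penalty drives every coefficient to zero,
# which is exactly what "the model is guaranteed not to be empty
# for C in (l1_min_C, infinity)" means in reverse.
clf = LogisticRegression(penalty='l1', solver='liblinear', C=c_min * 0.99)
clf.fit(X, y)
```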
Examples using sklearn.svm.l1_min_c
Regularization path of L1- Logistic Regression | sklearn.modules.generated.sklearn.svm.l1_min_c |
sklearn.tree.export_graphviz
sklearn.tree.export_graphviz(decision_tree, out_file=None, *, max_depth=None, feature_names=None, class_names=None, label='all', filled=False, leaves_parallel=False, impurity=True, node_ids=False, proportion=False, rotate=False, rounded=False, special_characters=False, precision=3) [source]
Export a decision tree in DOT format. This function generates a GraphViz representation of the decision tree, which is then written into out_file. Once exported, graphical renderings can be generated using, for example: $ dot -Tps tree.dot -o tree.ps (PostScript format)
$ dot -Tpng tree.dot -o tree.png (PNG format)
The sample counts that are shown are weighted with any sample_weights that might be present. Read more in the User Guide. Parameters
decision_treedecision tree classifier
The decision tree to be exported to GraphViz.
out_fileobject or str, default=None
Handle or name of the output file. If None, the result is returned as a string. Changed in version 0.20: Default of out_file changed from “tree.dot” to None.
max_depthint, default=None
The maximum depth of the representation. If None, the tree is fully generated.
feature_nameslist of str, default=None
Names of each of the features. If None generic names will be used (“feature_0”, “feature_1”, …).
class_nameslist of str or bool, default=None
Names of each of the target classes in ascending numerical order. Only relevant for classification and not supported for multi-output. If True, shows a symbolic representation of the class name.
label{‘all’, ‘root’, ‘none’}, default=’all’
Whether to show informative labels for impurity, etc. Options include ‘all’ to show at every node, ‘root’ to show only at the top root node, or ‘none’ to not show at any node.
filledbool, default=False
When set to True, paint nodes to indicate majority class for classification, extremity of values for regression, or purity of node for multi-output.
leaves_parallelbool, default=False
When set to True, draw all leaf nodes at the bottom of the tree.
impuritybool, default=True
When set to True, show the impurity at each node.
node_idsbool, default=False
When set to True, show the ID number on each node.
proportionbool, default=False
When set to True, change the display of ‘values’ and/or ‘samples’ to be proportions and percentages respectively.
rotatebool, default=False
When set to True, orient tree left to right rather than top-down.
roundedbool, default=False
When set to True, draw node boxes with rounded corners and use Helvetica fonts instead of Times-Roman.
special_charactersbool, default=False
When set to False, ignore special characters for PostScript compatibility.
precisionint, default=3
Number of digits of precision for floating point in the values of impurity, threshold and value attributes of each node. Returns
dot_datastring
String representation of the input tree in GraphViz dot format. Only returned if out_file is None. New in version 0.18. Examples >>> from sklearn.datasets import load_iris
>>> from sklearn import tree
>>> clf = tree.DecisionTreeClassifier()
>>> iris = load_iris()
>>> clf = clf.fit(iris.data, iris.target)
>>> tree.export_graphviz(clf)
'digraph Tree {... | sklearn.modules.generated.sklearn.tree.export_graphviz |
sklearn.tree.export_text
sklearn.tree.export_text(decision_tree, *, feature_names=None, max_depth=10, spacing=3, decimals=2, show_weights=False) [source]
Build a text report showing the rules of a decision tree. Note that backwards compatibility may not be supported. Parameters
decision_treeobject
The decision tree estimator to be exported. It can be an instance of DecisionTreeClassifier or DecisionTreeRegressor.
feature_nameslist of str, default=None
A list of length n_features containing the feature names. If None generic names will be used (“feature_0”, “feature_1”, …).
max_depthint, default=10
Only the first max_depth levels of the tree are exported. Truncated branches will be marked with “…”.
spacingint, default=3
Number of spaces between edges. The higher it is, the wider the result.
decimalsint, default=2
Number of decimal digits to display.
show_weightsbool, default=False
If true, the classification weights will be exported on each leaf. The classification weights are the number of samples of each class.
reportstring
Text summary of all the rules in the decision tree. Examples >>> from sklearn.datasets import load_iris
>>> from sklearn.tree import DecisionTreeClassifier
>>> from sklearn.tree import export_text
>>> iris = load_iris()
>>> X = iris['data']
>>> y = iris['target']
>>> decision_tree = DecisionTreeClassifier(random_state=0, max_depth=2)
>>> decision_tree = decision_tree.fit(X, y)
>>> r = export_text(decision_tree, feature_names=iris['feature_names'])
>>> print(r)
|--- petal width (cm) <= 0.80
| |--- class: 0
|--- petal width (cm) > 0.80
| |--- petal width (cm) <= 1.75
| | |--- class: 1
| |--- petal width (cm) > 1.75
| | |--- class: 2 | sklearn.modules.generated.sklearn.tree.export_text |
sklearn.tree.plot_tree
sklearn.tree.plot_tree(decision_tree, *, max_depth=None, feature_names=None, class_names=None, label='all', filled=False, impurity=True, node_ids=False, proportion=False, rotate='deprecated', rounded=False, precision=3, ax=None, fontsize=None) [source]
Plot a decision tree. The sample counts that are shown are weighted with any sample_weights that might be present. The visualization is fit automatically to the size of the axis. Use the figsize or dpi arguments of plt.figure to control the size of the rendering. Read more in the User Guide. New in version 0.21. Parameters
decision_treedecision tree regressor or classifier
The decision tree to be plotted.
max_depthint, default=None
The maximum depth of the representation. If None, the tree is fully generated.
feature_nameslist of strings, default=None
Names of each of the features. If None, generic names will be used (“X[0]”, “X[1]”, …).
class_nameslist of str or bool, default=None
Names of each of the target classes in ascending numerical order. Only relevant for classification and not supported for multi-output. If True, shows a symbolic representation of the class name.
label{‘all’, ‘root’, ‘none’}, default=’all’
Whether to show informative labels for impurity, etc. Options include ‘all’ to show at every node, ‘root’ to show only at the top root node, or ‘none’ to not show at any node.
filledbool, default=False
When set to True, paint nodes to indicate majority class for classification, extremity of values for regression, or purity of node for multi-output.
impuritybool, default=True
When set to True, show the impurity at each node.
node_idsbool, default=False
When set to True, show the ID number on each node.
proportionbool, default=False
When set to True, change the display of ‘values’ and/or ‘samples’ to be proportions and percentages respectively.
rotatebool, default=False
This parameter has no effect on the matplotlib tree visualisation and it is kept here for backward compatibility. Deprecated since version 0.23: rotate is deprecated in 0.23 and will be removed in 1.0 (renaming of 0.25).
roundedbool, default=False
When set to True, draw node boxes with rounded corners and use Helvetica fonts instead of Times-Roman.
precisionint, default=3
Number of digits of precision for floating point in the values of impurity, threshold and value attributes of each node.
axmatplotlib axis, default=None
Axes to plot to. If None, use current axis. Any previous content is cleared.
fontsizeint, default=None
Size of text font. If None, determined automatically to fit figure. Returns
annotationslist of artists
List containing the artists for the annotation boxes making up the tree. Examples >>> from sklearn.datasets import load_iris
>>> from sklearn import tree
>>> clf = tree.DecisionTreeClassifier(random_state=0)
>>> iris = load_iris()
>>> clf = clf.fit(iris.data, iris.target)
>>> tree.plot_tree(clf)
[Text(251.5,345.217,'X[3] <= 0.8...
Examples using sklearn.tree.plot_tree
Plot the decision surface of a decision tree on the iris dataset
Understanding the decision tree structure | sklearn.modules.generated.sklearn.tree.plot_tree |
sklearn.utils.all_estimators
sklearn.utils.all_estimators(type_filter=None) [source]
Get a list of all estimators from sklearn. This function crawls the module and gets all classes that inherit from BaseEstimator. Classes that are defined in test-modules are not included. Parameters
type_filter{“classifier”, “regressor”, “cluster”, “transformer”} or list of such str, default=None
Which kind of estimators should be returned. If None, no filter is applied and all estimators are returned. Possible values are ‘classifier’, ‘regressor’, ‘cluster’ and ‘transformer’ to get estimators only of these specific types, or a list of these to get the estimators that fit at least one of the types. Returns
estimatorslist of tuples
List of (name, class), where name is the class name as a string and class is the actual type of the class. | sklearn.modules.generated.sklearn.utils.all_estimators |
sklearn.utils.arrayfuncs.min_pos
sklearn.utils.arrayfuncs.min_pos()
Find the minimum value of an array over positive values. Returns a huge value if none of the values are positive. | sklearn.modules.generated.sklearn.utils.arrayfuncs.min_pos |
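A minimal sketch (the toy array is illustrative): non-positive entries are skipped, so the smallest strictly positive value is returned.

```python
import numpy as np
from sklearn.utils.arrayfuncs import min_pos

x = np.array([-3.0, 0.0, 2.5, 0.5])
smallest = min_pos(x)  # smallest strictly positive entry: 0.5
```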
sklearn.utils.assert_all_finite
sklearn.utils.assert_all_finite(X, *, allow_nan=False) [source]
Throw a ValueError if X contains NaN or infinity. Parameters
X{ndarray, sparse matrix}
allow_nanbool, default=False
If True, NaN values do not raise an error; infinite values still do. | sklearn.modules.generated.sklearn.utils.assert_all_finite |
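A short sketch of both behaviors (the toy array is illustrative):

```python
import numpy as np
from sklearn.utils import assert_all_finite

X = np.array([1.0, 2.0, np.nan])

# NaN present: a ValueError is raised by default.
try:
    assert_all_finite(X)
    raised = False
except ValueError:
    raised = True

# With allow_nan=True the same array passes (infinities would still fail).
assert_all_finite(X, allow_nan=True)
```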
sklearn.utils.as_float_array
sklearn.utils.as_float_array(X, *, copy=True, force_all_finite=True) [source]
Converts an array-like to an array of floats. The new dtype will be np.float32 or np.float64, depending on the original type. The function can create a copy or modify the argument depending on the argument copy. Parameters
X{array-like, sparse matrix}
copybool, default=True
If True, a copy of X will be created. If False, a copy may still be returned if X’s dtype is not a floating point type.
force_all_finitebool or ‘allow-nan’, default=True
Whether to raise an error on np.inf, np.nan, pd.NA in X. The possibilities are: True: Force all values of X to be finite. False: accepts np.inf, np.nan, pd.NA in X. ‘allow-nan’: accepts only np.nan and pd.NA values in X. Values cannot be infinite. New in version 0.20: force_all_finite accepts the string 'allow-nan'. Changed in version 0.23: Accepts pd.NA and converts it into np.nan Returns
XT{ndarray, sparse matrix}
An array of type float. | sklearn.modules.generated.sklearn.utils.as_float_array |
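A short sketch of the dtype promotion (input data is illustrative):

```python
import numpy as np
from sklearn.utils import as_float_array

X = np.array([[1, 2], [3, 4]], dtype=np.int64)  # integer input
X_float = as_float_array(X)                     # promoted to float64
```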
sklearn.utils.Bunch
sklearn.utils.Bunch(**kwargs) [source]
Container object exposing keys as attributes. Bunch objects are sometimes used as an output for functions and methods. They extend dictionaries by enabling values to be accessed by key, bunch["value_key"], or by an attribute, bunch.value_key. Examples >>> b = Bunch(a=1, b=2)
>>> b['b']
2
>>> b.b
2
>>> b.a = 3
>>> b['a']
3
>>> b.c = 6
>>> b['c']
6
Examples using sklearn.utils.Bunch
Species distribution modeling | sklearn.modules.generated.sklearn.utils.bunch |
sklearn.utils.check_array
sklearn.utils.check_array(array, accept_sparse=False, *, accept_large_sparse=True, dtype='numeric', order=None, copy=False, force_all_finite=True, ensure_2d=True, allow_nd=False, ensure_min_samples=1, ensure_min_features=1, estimator=None) [source]
Input validation on an array, list, sparse matrix or similar. By default, the input is checked to be a non-empty 2D array containing only finite values. If the dtype of the array is object, attempt converting to float, raising on failure. Parameters
arrayobject
Input object to check / convert.
accept_sparsestr, bool or list/tuple of str, default=False
String[s] representing allowed sparse matrix formats, such as ‘csc’, ‘csr’, etc. If the input is sparse but not in the allowed format, it will be converted to the first listed format. True allows the input to be any format. False means that a sparse matrix input will raise an error.
accept_large_sparsebool, default=True
If a CSR, CSC, COO or BSR sparse matrix is supplied and accepted by accept_sparse, accept_large_sparse=False will cause it to be accepted only if its indices are stored with a 32-bit dtype. New in version 0.20.
dtype‘numeric’, type, list of type or None, default=’numeric’
Data type of result. If None, the dtype of the input is preserved. If “numeric”, dtype is preserved unless array.dtype is object. If dtype is a list of types, conversion on the first type is only performed if the dtype of the input is not in the list.
order{‘F’, ‘C’} or None, default=None
Whether an array will be forced to be fortran or c-style. When order is None (default), then if copy=False, nothing is ensured about the memory layout of the output array; otherwise (copy=True) the memory layout of the returned array is kept as close as possible to the original array.
copybool, default=False
Whether a forced copy will be triggered. If copy=False, a copy might be triggered by a conversion.
force_all_finitebool or ‘allow-nan’, default=True
Whether to raise an error on np.inf, np.nan, pd.NA in array. The possibilities are: True: Force all values of array to be finite. False: accepts np.inf, np.nan, pd.NA in array. ‘allow-nan’: accepts only np.nan and pd.NA values in array. Values cannot be infinite. New in version 0.20: force_all_finite accepts the string 'allow-nan'. Changed in version 0.23: Accepts pd.NA and converts it into np.nan
ensure_2dbool, default=True
Whether to raise a value error if array is not 2D.
allow_ndbool, default=False
Whether to allow array.ndim > 2.
ensure_min_samplesint, default=1
Make sure that the array has a minimum number of samples in its first axis (rows for a 2D array). Setting to 0 disables this check.
ensure_min_featuresint, default=1
Make sure that the 2D array has some minimum number of features (columns). The default value of 1 rejects empty datasets. This check is only enforced when the input data has effectively 2 dimensions or is originally 1D and ensure_2d is True. Setting to 0 disables this check.
estimatorstr or estimator instance, default=None
If passed, include the name of the estimator in warning messages. Returns
array_convertedobject
The converted and validated array. | sklearn.modules.generated.sklearn.utils.check_array |
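A minimal sketch of the two most common outcomes (toy data is illustrative):

```python
import numpy as np
from sklearn.utils import check_array

# A nested list becomes a validated 2D ndarray of the requested dtype.
X_checked = check_array([[1, 2], [3, 4]], dtype=np.float64)

# 1D input is rejected by default (ensure_2d=True).
try:
    check_array([1, 2, 3])
    caught = False
except ValueError:
    caught = True
```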
sklearn.utils.check_consistent_length
sklearn.utils.check_consistent_length(*arrays) [source]
Check that all arrays have consistent first dimensions. Checks whether all objects in arrays have the same shape or length. Parameters
*arrayslist or tuple of input objects.
Objects that will be checked for consistent length. | sklearn.modules.generated.sklearn.utils.check_consistent_length |
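A minimal sketch (toy lists are illustrative): consistent inputs pass silently, mismatched lengths raise.

```python
from sklearn.utils import check_consistent_length

check_consistent_length([1, 2, 3], ["a", "b", "c"])  # equal lengths: no error

try:
    check_consistent_length([1, 2, 3], [1, 2])       # mismatched lengths
    mismatch = False
except ValueError:
    mismatch = True
```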
sklearn.utils.check_random_state
sklearn.utils.check_random_state(seed) [source]
Turn seed into a np.random.RandomState instance. Parameters
seedNone, int or instance of RandomState
If seed is None, return the RandomState singleton used by np.random. If seed is an int, return a new RandomState instance seeded with seed. If seed is already a RandomState instance, return it. Otherwise raise ValueError.
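A short sketch of the two main branches described above:

```python
import numpy as np
from sklearn.utils import check_random_state

rng = check_random_state(0)     # int -> new RandomState seeded with 0
same = check_random_state(rng)  # an existing RandomState is returned as-is
```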
Examples using sklearn.utils.check_random_state
Empirical evaluation of the impact of k-means initialization
MNIST classification using multinomial logistic + L1
Manifold Learning methods on a severed sphere
Isotonic Regression
Face completion with a multi-output estimators
Scaling the regularization parameter for SVCs | sklearn.modules.generated.sklearn.utils.check_random_state |
sklearn.utils.check_scalar
sklearn.utils.check_scalar(x, name, target_type, *, min_val=None, max_val=None) [source]
Validate scalar parameters type and value. Parameters
xobject
The scalar parameter to validate.
namestr
The name of the parameter to be printed in error messages.
target_typetype or tuple
Acceptable data types for the parameter.
min_valfloat or int, default=None
The minimum valid value the parameter can take. If None (default) it is implied that the parameter does not have a lower bound.
max_valfloat or int, default=None
The maximum valid value the parameter can take. If None (default) it is implied that the parameter does not have an upper bound. Raises
TypeError
If the parameter’s type does not match the desired type. ValueError
If the parameter’s value violates the given bounds. | sklearn.modules.generated.sklearn.utils.check_scalar |
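A sketch of both outcomes (the parameter name "n_neighbors" is just an illustrative label):

```python
from sklearn.utils import check_scalar

check_scalar(5, "n_neighbors", int, min_val=1)      # valid: returns quietly

try:
    check_scalar(0, "n_neighbors", int, min_val=1)  # below the lower bound
    bounds_ok = True
except ValueError:
    bounds_ok = False
```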
sklearn.utils.check_X_y
sklearn.utils.check_X_y(X, y, accept_sparse=False, *, accept_large_sparse=True, dtype='numeric', order=None, copy=False, force_all_finite=True, ensure_2d=True, allow_nd=False, multi_output=False, ensure_min_samples=1, ensure_min_features=1, y_numeric=False, estimator=None) [source]
Input validation for standard estimators. Checks X and y for consistent length, enforces X to be 2D and y 1D. By default, X is checked to be non-empty and containing only finite values. Standard input checks are also applied to y, such as checking that y does not have np.nan or np.inf targets. For multi-label y, set multi_output=True to allow 2D and sparse y. If the dtype of X is object, attempt converting to float, raising on failure. Parameters
X{ndarray, list, sparse matrix}
Input data.
y{ndarray, list, sparse matrix}
Labels.
accept_sparsestr, bool or list of str, default=False
String[s] representing allowed sparse matrix formats, such as ‘csc’, ‘csr’, etc. If the input is sparse but not in the allowed format, it will be converted to the first listed format. True allows the input to be any format. False means that a sparse matrix input will raise an error.
accept_large_sparsebool, default=True
If a CSR, CSC, COO or BSR sparse matrix is supplied and accepted by accept_sparse, accept_large_sparse=False will cause it to be accepted only if its indices are stored with a 32-bit dtype. New in version 0.20.
dtype‘numeric’, type, list of type or None, default=’numeric’
Data type of result. If None, the dtype of the input is preserved. If “numeric”, dtype is preserved unless array.dtype is object. If dtype is a list of types, conversion on the first type is only performed if the dtype of the input is not in the list.
order{‘F’, ‘C’}, default=None
Whether an array will be forced to be fortran or c-style.
copybool, default=False
Whether a forced copy will be triggered. If copy=False, a copy might be triggered by a conversion.
force_all_finitebool or ‘allow-nan’, default=True
Whether to raise an error on np.inf, np.nan, pd.NA in X. This parameter does not influence whether y can have np.inf, np.nan, pd.NA values. The possibilities are: True: Force all values of X to be finite. False: accepts np.inf, np.nan, pd.NA in X. ‘allow-nan’: accepts only np.nan or pd.NA values in X. Values cannot be infinite. New in version 0.20: force_all_finite accepts the string 'allow-nan'. Changed in version 0.23: Accepts pd.NA and converts it into np.nan
ensure_2dbool, default=True
Whether to raise a value error if X is not 2D.
allow_ndbool, default=False
Whether to allow X.ndim > 2.
multi_outputbool, default=False
Whether to allow 2D y (array or sparse matrix). If false, y will be validated as a vector. y cannot have np.nan or np.inf values if multi_output=True.
ensure_min_samplesint, default=1
Make sure that X has a minimum number of samples in its first axis (rows for a 2D array).
ensure_min_featuresint, default=1
Make sure that the 2D array has some minimum number of features (columns). The default value of 1 rejects empty datasets. This check is only enforced when X has effectively 2 dimensions or is originally 1D and ensure_2d is True. Setting to 0 disables this check.
y_numericbool, default=False
Whether to ensure that y has a numeric type. If dtype of y is object, it is converted to float64. Should only be used for regression algorithms.
estimatorstr or estimator instance, default=None
If passed, include the name of the estimator in warning messages. Returns
X_convertedobject
The converted and validated X.
y_convertedobject
The converted and validated y. | sklearn.modules.generated.sklearn.utils.check_x_y |
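A minimal sketch of the standard validation path (toy data is illustrative):

```python
from sklearn.utils import check_X_y

X = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
y = [0, 1, 0]
X_checked, y_checked = check_X_y(X, y)  # X validated as 2D, y as 1D
```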
sklearn.utils.class_weight.compute_class_weight
sklearn.utils.class_weight.compute_class_weight(class_weight, *, classes, y) [source]
Estimate class weights for unbalanced datasets. Parameters
class_weightdict, ‘balanced’ or None
If ‘balanced’, class weights will be given by n_samples / (n_classes * np.bincount(y)). If a dictionary is given, keys are classes and values are corresponding class weights. If None is given, the class weights will be uniform.
classesndarray
Array of the classes occurring in the data, as given by np.unique(y_org) with y_org the original class labels.
yarray-like of shape (n_samples,)
Array of original class labels per sample. Returns
class_weight_vectndarray of shape (n_classes,)
Array with class_weight_vect[i] the weight for i-th class. References The “balanced” heuristic is inspired by Logistic Regression in Rare Events Data, King, Zeng, 2001. | sklearn.modules.generated.sklearn.utils.class_weight.compute_class_weight |
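To make the "balanced" formula concrete, a small sketch (toy labels are illustrative):

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

y = np.array([0, 0, 0, 1])
weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=y)
# n_samples / (n_classes * np.bincount(y)) -> [4/(2*3), 4/(2*1)]
```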
sklearn.utils.class_weight.compute_sample_weight
sklearn.utils.class_weight.compute_sample_weight(class_weight, y, *, indices=None) [source]
Estimate sample weights by class for unbalanced datasets. Parameters
class_weightdict, list of dicts, “balanced”, or None
Weights associated with classes in the form {class_label: weight}. If not given, all classes are supposed to have weight one. For multi-output problems, a list of dicts can be provided in the same order as the columns of y. Note that for multioutput (including multilabel) weights should be defined for each class of every column in its own dict. For example, for four-class multilabel classification weights should be [{0: 1, 1: 1}, {0: 1, 1: 5}, {0: 1, 1: 1}, {0: 1, 1: 1}] instead of [{1:1}, {2:5}, {3:1}, {4:1}]. The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data: n_samples / (n_classes * np.bincount(y)). For multi-output, the weights of each column of y will be multiplied.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
Array of original class labels per sample.
indicesarray-like of shape (n_subsample,), default=None
Array of indices to be used in a subsample. Can be of length less than n_samples in the case of a subsample, or equal to n_samples in the case of a bootstrap subsample with repeated indices. If None, the sample weight will be calculated over the full sample. Only “balanced” is supported for class_weight if this is provided. Returns
sample_weight_vectndarray of shape (n_samples,)
Array with sample weights as applied to the original y. | sklearn.modules.generated.sklearn.utils.class_weight.compute_sample_weight |
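A small sketch of the "balanced" mode applied per sample (toy labels are illustrative):

```python
import numpy as np
from sklearn.utils.class_weight import compute_sample_weight

y = np.array([0, 0, 1, 1, 1, 1])
sw = compute_sample_weight("balanced", y)
# class 0: 6/(2*2) = 1.5 ; class 1: 6/(2*4) = 0.75
```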
sklearn.utils.deprecated
sklearn.utils.deprecated(extra='') [source]
Decorator to mark a function or class as deprecated. Issues a warning when the function is called or the class is instantiated, and adds a warning to the docstring. The optional extra argument will be appended to the deprecation message and the docstring. Note: to use this with the default value for extra, put in an empty set of parentheses: >>> from sklearn.utils import deprecated
>>> deprecated()
<sklearn.utils.deprecation.deprecated object at ...>
>>> @deprecated()
... def some_function(): pass
Parameters
extrastr, default=’’
To be added to the deprecation messages. | sklearn.modules.generated.sklearn.utils.deprecated |
sklearn.utils.estimator_checks.check_estimator
sklearn.utils.estimator_checks.check_estimator(Estimator, generate_only=False) [source]
Check if estimator adheres to scikit-learn conventions. This function will run an extensive test-suite for input validation, shapes, etc, making sure that the estimator complies with scikit-learn conventions as detailed in Rolling your own estimator. Additional tests for classifiers, regressors, clustering or transformers will be run if the Estimator class inherits from the corresponding mixin from sklearn.base. Setting generate_only=True returns a generator that yields (estimator, check) tuples where the check can be called independently from each other, i.e. check(estimator). This allows all checks to be run independently and report the checks that are failing. scikit-learn provides a pytest specific decorator, parametrize_with_checks, making it easier to test multiple estimators. Parameters
Estimatorestimator object
Estimator instance to check. Changed in version 0.24: Passing a class was deprecated in version 0.23, and support for classes was removed in 0.24.
generate_onlybool, default=False
When False, checks are evaluated when check_estimator is called. When True, check_estimator returns a generator that yields (estimator, check) tuples. The check is run by calling check(estimator). New in version 0.22. Returns
checks_generatorgenerator
Generator that yields (estimator, check) tuples. Returned when generate_only=True. | sklearn.modules.generated.sklearn.utils.estimator_checks.check_estimator |
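A minimal invocation on a built-in estimator (note this runs the whole compliance suite and can take a while):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.utils.estimator_checks import check_estimator

# Pass an instance, not a class; completes without raising when all checks pass.
check_estimator(LogisticRegression())
passed = True
```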
sklearn.utils.estimator_checks.parametrize_with_checks
sklearn.utils.estimator_checks.parametrize_with_checks(estimators) [source]
Pytest specific decorator for parametrizing estimator checks. The id of each check is set to be a pprint version of the estimator and the name of the check with its keyword arguments. This makes it possible to use pytest -k to specify which tests to run: pytest test_check_estimators.py -k check_estimators_fit_returns_self
Parameters
estimatorslist of estimators instances
Estimators to generate checks for. Changed in version 0.24: Passing a class was deprecated in version 0.23, and support for classes was removed in 0.24. Pass an instance instead. New in version 0.24. Returns
decoratorpytest.mark.parametrize
Examples >>> from sklearn.utils.estimator_checks import parametrize_with_checks
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.tree import DecisionTreeRegressor
>>> @parametrize_with_checks([LogisticRegression(),
... DecisionTreeRegressor()])
... def test_sklearn_compatible_estimator(estimator, check):
... check(estimator)
Examples using sklearn.utils.estimator_checks.parametrize_with_checks
Release Highlights for scikit-learn 0.22 | sklearn.modules.generated.sklearn.utils.estimator_checks.parametrize_with_checks |
sklearn.utils.estimator_html_repr
sklearn.utils.estimator_html_repr(estimator) [source]
Build a HTML representation of an estimator. Read more in the User Guide. Parameters
estimatorestimator object
The estimator to visualize. Returns
html: str
HTML representation of estimator. | sklearn.modules.generated.sklearn.utils.estimator_html_repr |
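A minimal sketch producing the HTML string for a built-in estimator:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.utils import estimator_html_repr

html = estimator_html_repr(LogisticRegression())  # HTML diagram as a string
```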
sklearn.utils.extmath.density
sklearn.utils.extmath.density(w, **kwargs) [source]
Compute density of a sparse vector. Parameters
warray-like
The sparse vector. Returns
float
The density of w, between 0 and 1.
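A small sketch (toy vector is illustrative): density is the fraction of nonzero entries.

```python
import numpy as np
from sklearn.utils.extmath import density

w = np.array([0.0, 1.0, 0.0, 2.0])
d = density(w)  # 2 nonzero entries out of 4 -> 0.5
```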
Examples using sklearn.utils.extmath.density
Classification of text documents using sparse features | sklearn.modules.generated.sklearn.utils.extmath.density |
sklearn.utils.extmath.fast_logdet
sklearn.utils.extmath.fast_logdet(A) [source]
Compute log(det(A)) for A symmetric. Equivalent to np.log(np.linalg.det(A)) but more robust. It returns -Inf if det(A) is non-positive or is not defined. Parameters
Aarray-like
The matrix. | sklearn.modules.generated.sklearn.utils.extmath.fast_logdet |
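A short sketch of both cases (toy matrices are illustrative):

```python
import numpy as np
from sklearn.utils.extmath import fast_logdet

A = np.array([[2.0, 0.0], [0.0, 3.0]])
ld = fast_logdet(A)                        # log(det(A)) = log(6)

singular = fast_logdet(np.zeros((2, 2)))   # det <= 0 -> -inf
```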
sklearn.utils.extmath.randomized_range_finder
sklearn.utils.extmath.randomized_range_finder(A, *, size, n_iter, power_iteration_normalizer='auto', random_state=None) [source]
Computes an orthonormal matrix whose range approximates the range of A. Parameters
A2D array
The input data matrix.
sizeint
Size of the return array.
n_iterint
Number of power iterations used to stabilize the result.
power_iteration_normalizer{‘auto’, ‘QR’, ‘LU’, ‘none’}, default=’auto’
Whether the power iterations are normalized with step-by-step QR factorization (the slowest but most accurate), ‘none’ (the fastest but numerically unstable when n_iter is large, e.g. typically 5 or larger), or ‘LU’ factorization (numerically stable but can lose slightly in accuracy). The ‘auto’ mode applies no normalization if n_iter <= 2 and switches to LU otherwise. New in version 0.18.
random_stateint, RandomState instance or None, default=None
The seed of the pseudo random number generator to use when shuffling the data, i.e. getting the random vectors to initialize the algorithm. Pass an int for reproducible results across multiple function calls. See Glossary. Returns
Qndarray
A (size x size) projection matrix, the range of which approximates well the range of the input matrix A. Notes Follows Algorithm 4.3 of Finding structure with randomness: Stochastic algorithms for constructing approximate matrix decompositions Halko, et al., 2009 (arXiv:0909.4061) https://arxiv.org/pdf/0909.4061.pdf An implementation of a randomized algorithm for principal component analysis A. Szlam et al. 2014 | sklearn.modules.generated.sklearn.utils.extmath.randomized_range_finder |
sklearn.utils.extmath.randomized_svd
sklearn.utils.extmath.randomized_svd(M, n_components, *, n_oversamples=10, n_iter='auto', power_iteration_normalizer='auto', transpose='auto', flip_sign=True, random_state=0) [source]
Computes a truncated randomized SVD. Parameters
M{ndarray, sparse matrix}
Matrix to decompose.
n_componentsint
Number of singular values and vectors to extract.
n_oversamplesint, default=10
Additional number of random vectors to sample the range of M so as to ensure proper conditioning. The total number of random vectors used to find the range of M is n_components + n_oversamples. Smaller number can improve speed but can negatively impact the quality of approximation of singular vectors and singular values.
n_iterint or ‘auto’, default=’auto’
Number of power iterations. It can be used to deal with very noisy problems. When ‘auto’, it is set to 4, unless n_components is small (< .1 * min(X.shape)), in which case n_iter is set to 7. This improves precision with few components. Changed in version 0.18.
power_iteration_normalizer{‘auto’, ‘QR’, ‘LU’, ‘none’}, default=’auto’
Whether the power iterations are normalized with step-by-step QR factorization (the slowest but most accurate), ‘none’ (the fastest but numerically unstable when n_iter is large, e.g. typically 5 or larger), or ‘LU’ factorization (numerically stable but can lose slightly in accuracy). The ‘auto’ mode applies no normalization if n_iter <= 2 and switches to LU otherwise. New in version 0.18.
transposebool or ‘auto’, default=’auto’
Whether the algorithm should be applied to M.T instead of M. The result should approximately be the same. The ‘auto’ mode will trigger the transposition if M.shape[1] > M.shape[0] since this implementation of randomized SVD tends to be a little faster in that case. Changed in version 0.18.
flip_signbool, default=True
The output of a singular value decomposition is only unique up to a permutation of the signs of the singular vectors. If flip_sign is set to True, the sign ambiguity is resolved by making the largest loadings for each component in the left singular vectors positive.
random_stateint, RandomState instance or None, default=0
The seed of the pseudo random number generator to use when shuffling the data, i.e. getting the random vectors to initialize the algorithm. Pass an int for reproducible results across multiple function calls. See Glossary. Notes This algorithm finds a (usually very good) approximate truncated singular value decomposition using randomization to speed up the computations. It is particularly fast on large matrices on which you wish to extract only a small number of components. In order to obtain further speed up, n_iter can be set <=2 (at the cost of loss of precision). References Finding structure with randomness: Stochastic algorithms for constructing approximate matrix decompositions Halko, et al., 2009 https://arxiv.org/abs/0909.4061
A randomized algorithm for the decomposition of matrices Per-Gunnar Martinsson, Vladimir Rokhlin and Mark Tygert An implementation of a randomized algorithm for principal component analysis A. Szlam et al. 2014 | sklearn.modules.generated.sklearn.utils.extmath.randomized_svd |
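A small sketch comparing the truncated result against the exact SVD (random toy matrix is illustrative):

```python
import numpy as np
from sklearn.utils.extmath import randomized_svd

rng = np.random.RandomState(0)
M = rng.rand(50, 20)

U, s, Vt = randomized_svd(M, n_components=5, random_state=0)
s_exact = np.linalg.svd(M, compute_uv=False)[:5]  # reference singular values
```

The leading singular values agree closely with the exact decomposition; accuracy degrades only for the trailing components.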
sklearn.utils.extmath.safe_sparse_dot
sklearn.utils.extmath.safe_sparse_dot(a, b, *, dense_output=False) [source]
Dot product that handles the sparse matrix case correctly. Parameters
a{ndarray, sparse matrix}
b{ndarray, sparse matrix}
dense_outputbool, default=False
When False, a and b both being sparse will yield sparse output. When True, output will always be a dense array. Returns
dot_product{ndarray, sparse matrix}
Sparse if a and b are sparse and dense_output=False. | sklearn.modules.generated.sklearn.utils.extmath.safe_sparse_dot |
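A short sketch of the dense_output switch (toy matrices are illustrative):

```python
import numpy as np
from scipy import sparse
from sklearn.utils.extmath import safe_sparse_dot

a = sparse.csr_matrix(np.eye(3))
b = sparse.csr_matrix(np.arange(9.0).reshape(3, 3))

prod_sparse = safe_sparse_dot(a, b)                    # sparse in, sparse out
prod_dense = safe_sparse_dot(a, b, dense_output=True)  # forced dense ndarray
```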
sklearn.utils.extmath.weighted_mode
sklearn.utils.extmath.weighted_mode(a, w, *, axis=0) [source]
Returns an array of the weighted modal (most common) value in a. If there is more than one such value, only the first is returned. The bin-count for the modal bins is also returned. This is an extension of the algorithm in scipy.stats.mode. Parameters
aarray-like
n-dimensional array of which to find mode(s).
warray-like
n-dimensional array of weights for each value.
axisint, default=0
Axis along which to operate. Default is 0, i.e. the first axis. Returns
valsndarray
Array of modal values.
scorendarray
Array of weighted counts for each mode. See also
scipy.stats.mode
Examples >>> from sklearn.utils.extmath import weighted_mode
>>> x = [4, 1, 4, 2, 4, 2]
>>> weights = [1, 1, 1, 1, 1, 1]
>>> weighted_mode(x, weights)
(array([4.]), array([3.]))
The value 4 appears three times: with uniform weights, the result is simply the mode of the distribution. >>> weights = [1, 3, 0.5, 1.5, 1, 2] # deweight the 4's
>>> weighted_mode(x, weights)
(array([2.]), array([3.5]))
The value 2 has the highest score: it appears twice with weights of 1.5 and 2: the sum of these is 3.5. | sklearn.modules.generated.sklearn.utils.extmath.weighted_mode |
sklearn.utils.gen_even_slices
sklearn.utils.gen_even_slices(n, n_packs, *, n_samples=None) [source]
Generator to create n_packs slices going up to n. Parameters
nint
n_packsint
Number of slices to generate.
n_samplesint, default=None
Number of samples. Pass n_samples when the slices are to be used for sparse matrix indexing; slicing off-the-end raises an exception, while it works for NumPy arrays. Yields
slice
Examples >>> from sklearn.utils import gen_even_slices
>>> list(gen_even_slices(10, 1))
[slice(0, 10, None)]
>>> list(gen_even_slices(10, 10))
[slice(0, 1, None), slice(1, 2, None), ..., slice(9, 10, None)]
>>> list(gen_even_slices(10, 5))
[slice(0, 2, None), slice(2, 4, None), ..., slice(8, 10, None)]
>>> list(gen_even_slices(10, 3))
[slice(0, 4, None), slice(4, 7, None), slice(7, 10, None)]
Examples using sklearn.utils.gen_even_slices
Poisson regression and non-normal loss | sklearn.modules.generated.sklearn.utils.gen_even_slices |
sklearn.utils.graph.single_source_shortest_path_length
sklearn.utils.graph.single_source_shortest_path_length(graph, source, *, cutoff=None) [source]
Return the shortest path length from source to all reachable nodes. Returns a dictionary of shortest path lengths keyed by target. Parameters
graph{sparse matrix, ndarray} of shape (n, n)
Adjacency matrix of the graph. Sparse matrix of format LIL is preferred.
sourceint
Starting node for path.
cutoffint, default=None
Depth to stop the search - only paths of length <= cutoff are returned. Examples >>> from sklearn.utils.graph import single_source_shortest_path_length
>>> import numpy as np
>>> graph = np.array([[ 0, 1, 0, 0],
... [ 1, 0, 1, 0],
... [ 0, 1, 0, 1],
... [ 0, 0, 1, 0]])
>>> list(sorted(single_source_shortest_path_length(graph, 0).items()))
[(0, 0), (1, 1), (2, 2), (3, 3)]
>>> graph = np.ones((6, 6))
>>> list(sorted(single_source_shortest_path_length(graph, 2).items()))
[(0, 1), (1, 1), (2, 0), (3, 1), (4, 1), (5, 1)] | sklearn.modules.generated.sklearn.utils.graph.single_source_shortest_path_length |
sklearn.utils.graph_shortest_path.graph_shortest_path
sklearn.utils.graph_shortest_path.graph_shortest_path()
Perform a shortest-path graph search on a positive directed or undirected graph. Parameters
dist_matrixarraylike or sparse matrix, shape = (N,N)
Array of positive distances. If vertex i is connected to vertex j, then dist_matrix[i,j] gives the distance between the vertices. If vertex i is not connected to vertex j, then dist_matrix[i,j] = 0
directedboolean
if True, then find the shortest path on a directed graph: only progress from a point to its neighbors, not the other way around. if False, then find the shortest path on an undirected graph: the algorithm can progress from a point to its neighbors and vice versa.
methodstring [‘auto’|’FW’|’D’]
method to use. Options are ‘auto’ : attempt to choose the best method for the current problem ‘FW’ : Floyd-Warshall algorithm. O[N^3] ‘D’ : Dijkstra’s algorithm with Fibonacci heaps. O[(k+log(N))N^2] Returns
Gnp.ndarray, float, shape = [N,N]
G[i,j] gives the shortest distance from point i to point j along the graph. Notes As currently implemented, Dijkstra’s algorithm does not work for graphs with direction-dependent distances when directed == False. i.e., if dist_matrix[i,j] and dist_matrix[j,i] are not equal and both are nonzero, method=’D’ will not necessarily yield the correct result. Also, these routines have not been tested for graphs with negative distances. Negative distances can lead to infinite cycles that must be handled by specialized algorithms. | sklearn.modules.generated.sklearn.utils.graph_shortest_path.graph_shortest_path |
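Note that graph_shortest_path has since been deprecated in scikit-learn in favor of scipy.sparse.csgraph.shortest_path, which exposes the same ‘FW’/‘D’ method choice; a sketch of the equivalent call:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

# 0--1 (weight 1) and 0--2 (weight 2); zeros mark missing edges.
dist_matrix = np.array([[0.0, 1.0, 2.0],
                        [1.0, 0.0, 0.0],
                        [2.0, 0.0, 0.0]])
D = shortest_path(dist_matrix, method="FW", directed=False)
# D[1, 2] is 3.0: the only route from 1 to 2 goes through 0.
```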
sklearn.utils.indexable
sklearn.utils.indexable(*iterables) [source]
Make arrays indexable for cross-validation. Checks consistent length, passes through None, and ensures that everything can be indexed by converting sparse matrices to csr and converting non-iterable objects to arrays. Parameters
*iterables{lists, dataframes, ndarrays, sparse matrices}
List of objects to ensure sliceability. | sklearn.modules.generated.sklearn.utils.indexable |
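A sketch of the two conversions mentioned above (COO matrices do not support row indexing, so they are converted to CSR, while None passes through):

```python
import numpy as np
from scipy import sparse
from sklearn.utils import indexable

X = sparse.coo_matrix(np.eye(3))   # COO does not support row indexing
X_idx, groups = indexable(X, None)
```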
sklearn.utils.metaestimators.if_delegate_has_method
sklearn.utils.metaestimators.if_delegate_has_method(delegate) [source]
Create a decorator for methods that are delegated to a sub-estimator. This enables ducktyping by hasattr returning True according to the sub-estimator. Parameters
delegatestring, list of strings or tuple of strings
Name of the sub-estimator that can be accessed as an attribute of the base object. If a list or a tuple of names are provided, the first sub-estimator that is an attribute of the base object will be used.
Examples using sklearn.utils.metaestimators.if_delegate_has_method
Inductive Clustering | sklearn.modules.generated.sklearn.utils.metaestimators.if_delegate_has_method |
sklearn.utils.multiclass.is_multilabel
sklearn.utils.multiclass.is_multilabel(y) [source]
Check if y is in a multilabel format. Parameters
yndarray of shape (n_samples,)
Target values. Returns
outbool
Return True if y is in a multilabel format, else False. Examples >>> import numpy as np
>>> from sklearn.utils.multiclass import is_multilabel
>>> is_multilabel([0, 1, 0, 1])
False
>>> is_multilabel([[1], [0, 2], []])
False
>>> is_multilabel(np.array([[1, 0], [0, 0]]))
True
>>> is_multilabel(np.array([[1], [0], [0]]))
False
>>> is_multilabel(np.array([[1, 0, 0]]))
True | sklearn.modules.generated.sklearn.utils.multiclass.is_multilabel |
sklearn.utils.multiclass.type_of_target
sklearn.utils.multiclass.type_of_target(y) [source]
Determine the type of data indicated by the target. Note that this type is the most specific type that can be inferred. For example:
binary is more specific but compatible with multiclass.
multiclass of integers is more specific but compatible with continuous.
multilabel-indicator is more specific but compatible with multiclass-multioutput. Parameters
yarray-like
Returns
target_typestr
One of:
‘continuous’: y is an array-like of floats that are not all integers, and is 1d or a column vector.
‘continuous-multioutput’: y is a 2d array of floats that are not all integers, and both dimensions are of size > 1.
‘binary’: y contains <= 2 discrete values and is 1d or a column vector.
‘multiclass’: y contains more than two discrete values, is not a sequence of sequences, and is 1d or a column vector.
‘multiclass-multioutput’: y is a 2d array that contains more than two discrete values, is not a sequence of sequences, and both dimensions are of size > 1.
‘multilabel-indicator’: y is a label indicator matrix, an array of two dimensions with at least two columns, and at most 2 unique values.
‘unknown’: y is array-like but none of the above, such as a 3d array, sequence of sequences, or an array of non-sequence objects.
Examples >>> import numpy as np
>>> type_of_target([0.1, 0.6])
'continuous'
>>> type_of_target([1, -1, -1, 1])
'binary'
>>> type_of_target(['a', 'b', 'a'])
'binary'
>>> type_of_target([1.0, 2.0])
'binary'
>>> type_of_target([1, 0, 2])
'multiclass'
>>> type_of_target([1.0, 0.0, 3.0])
'multiclass'
>>> type_of_target(['a', 'b', 'c'])
'multiclass'
>>> type_of_target(np.array([[1, 2], [3, 1]]))
'multiclass-multioutput'
>>> type_of_target([[1, 2]])
'multilabel-indicator'
>>> type_of_target(np.array([[1.5, 2.0], [3.0, 1.6]]))
'continuous-multioutput'
>>> type_of_target(np.array([[0, 1], [1, 1]]))
'multilabel-indicator' | sklearn.modules.generated.sklearn.utils.multiclass.type_of_target |
sklearn.utils.multiclass.unique_labels
sklearn.utils.multiclass.unique_labels(*ys) [source]
Extract an ordered array of unique labels. We don’t allow:
mix of multilabel and multiclass (single label) targets
mix of label indicator matrix and anything else (because there are no explicit labels)
mix of label indicator matrices of different sizes
mix of string and integer labels
At the moment, we also don’t allow “multiclass-multioutput” input type. Parameters
*ysarray-likes
Returns
outndarray of shape (n_unique_labels,)
An ordered array of unique labels. Examples >>> from sklearn.utils.multiclass import unique_labels
>>> unique_labels([3, 5, 5, 5, 7, 7])
array([3, 5, 7])
>>> unique_labels([1, 2, 3, 4], [2, 2, 3, 4])
array([1, 2, 3, 4])
>>> unique_labels([1, 2, 10], [5, 11])
array([ 1, 2, 5, 10, 11]) | sklearn.modules.generated.sklearn.utils.multiclass.unique_labels |
sklearn.utils.murmurhash3_32
sklearn.utils.murmurhash3_32()
Compute the 32-bit MurmurHash3 of key at seed. The underlying implementation is MurmurHash3_x86_32, generating a low-latency 32-bit hash suitable for implementing lookup tables, Bloom filters, count-min sketches or feature hashing. Parameters
keynp.int32, bytes, unicode or ndarray of dtype=np.int32
The physical object to hash.
seedint, default=0
Integer seed for the hashing algorithm.
positivebool, default=False
If True, the result is cast to an unsigned int in the range [0, 2 ** 32 - 1]; if False, to a signed int in the range [-(2 ** 31), 2 ** 31 - 1]. | sklearn.modules.generated.sklearn.utils.murmurhash3_32 |
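For example (the key string is illustrative), the hash is deterministic for a given key and seed, and positive=True maps the result into the unsigned range:

```python
from sklearn.utils import murmurhash3_32

h_signed = murmurhash3_32("my_feature", seed=0)
h_unsigned = murmurhash3_32("my_feature", seed=0, positive=True)
# Same key and seed always produce the same hash value.
```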
sklearn.utils.parallel_backend
sklearn.utils.parallel_backend(backend, n_jobs=- 1, inner_max_num_threads=None, **backend_params) [source]
Change the default backend used by Parallel inside a with block. If backend is a string it must match a previously registered implementation using the register_parallel_backend function. By default the following backends are available: ‘loky’: single-host, process-based parallelism (used by default), ‘threading’: single-host, thread-based parallelism, ‘multiprocessing’: legacy single-host, process-based parallelism. ‘loky’ is recommended to run functions that manipulate Python objects. ‘threading’ is a low-overhead alternative that is most efficient for functions that release the Global Interpreter Lock: e.g. I/O-bound code or CPU-bound code in a few calls to native code that explicitly releases the GIL. In addition, if the dask and distributed Python packages are installed, it is possible to use the ‘dask’ backend for better scheduling of nested parallel calls without over-subscription and potentially distribute parallel calls over a networked cluster of several hosts. It is also possible to use the distributed ‘ray’ backend for distributing the workload to a cluster of nodes. To use the ‘ray’ joblib backend add the following lines: >>> from ray.util.joblib import register_ray
>>> register_ray()
>>> with parallel_backend("ray"):
... print(Parallel()(delayed(neg)(i + 1) for i in range(5)))
[-1, -2, -3, -4, -5]
Alternatively the backend can be passed directly as an instance. By default all available workers will be used (n_jobs=-1) unless the caller passes an explicit value for the n_jobs parameter. This is an alternative to passing a backend='backend_name' argument to the Parallel class constructor. It is particularly useful when calling into library code that uses joblib internally but does not expose the backend argument in its own API. >>> from operator import neg
>>> with parallel_backend('threading'):
... print(Parallel()(delayed(neg)(i + 1) for i in range(5)))
...
[-1, -2, -3, -4, -5]
Warning: this function is experimental and subject to change in a future version of joblib. Joblib also tries to limit the oversubscription by limiting the number of threads usable in some third-party library threadpools like OpenBLAS, MKL or OpenMP. The default limit in each worker is set to max(cpu_count() // effective_n_jobs, 1) but this limit can be overwritten with the inner_max_num_threads argument which will be used to set this limit in the child processes. New in version 0.10. | sklearn.modules.generated.sklearn.utils.parallel_backend |
sklearn.utils.random.sample_without_replacement
sklearn.utils.random.sample_without_replacement()
Sample integers without replacement. Select n_samples integers from the set [0, n_population) without replacement. Parameters
n_populationint
The size of the set to sample from.
n_samplesint
The number of integers to sample.
random_stateint, RandomState instance or None, default=None
If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.
method{“auto”, “tracking_selection”, “reservoir_sampling”, “pool”}, default=’auto’
If method == “auto”, the ratio of n_samples / n_population is used to determine which algorithm to use: If ratio is between 0 and 0.01, tracking selection is used. If ratio is between 0.01 and 0.99, numpy.random.permutation is used. If ratio is greater than 0.99, reservoir sampling is used. The order of the selected integers is undefined. If a random order is desired, the selected subset should be shuffled.
If method == “tracking_selection”, a set based implementation is used which is suitable for n_samples <<< n_population.
If method == “reservoir_sampling”, a reservoir sampling algorithm is used which is suitable for high memory constraint or when O(n_samples) ~ O(n_population). The order of the selected integers is undefined. If a random order is desired, the selected subset should be shuffled.
If method == “pool”, a pool based algorithm is particularly fast, even faster than the tracking selection method. However, a vector containing the entire population has to be initialized. If n_samples ~ n_population, the reservoir sampling method is faster. Returns
outndarray of shape (n_samples,)
The sampled subset of integers. The subset of selected integers might not be randomized; see the method argument. | sklearn.modules.generated.sklearn.utils.random.sample_without_replacement |
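A short sketch with illustrative sizes: draw 10 distinct integers from [0, 100) reproducibly.

```python
from sklearn.utils.random import sample_without_replacement

# random_state makes the draw reproducible across calls.
picked = sample_without_replacement(n_population=100, n_samples=10,
                                    random_state=0)
```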
sklearn.utils.register_parallel_backend
sklearn.utils.register_parallel_backend(name, factory, make_default=False) [source]
Register a new Parallel backend factory. The new backend can then be selected by passing its name as the backend argument to the Parallel class. Moreover, the default backend can be overwritten globally by setting make_default=True. The factory can be any callable that takes no argument and return an instance of ParallelBackendBase. Warning: this function is experimental and subject to change in a future version of joblib. New in version 0.10. | sklearn.modules.generated.sklearn.utils.register_parallel_backend |
sklearn.utils.resample
sklearn.utils.resample(*arrays, replace=True, n_samples=None, random_state=None, stratify=None) [source]
Resample arrays or sparse matrices in a consistent way. The default strategy implements one step of the bootstrapping procedure. Parameters
*arrayssequence of array-like of shape (n_samples,) or (n_samples, n_outputs)
Indexable data-structures can be arrays, lists, dataframes or scipy sparse matrices with consistent first dimension.
replacebool, default=True
Implements resampling with replacement. If False, this will implement (sliced) random permutations.
n_samplesint, default=None
Number of samples to generate. If left to None this is automatically set to the first dimension of the arrays. If replace is False it should not be larger than the length of arrays.
random_stateint, RandomState instance or None, default=None
Determines random number generation for shuffling the data. Pass an int for reproducible results across multiple function calls. See Glossary.
stratifyarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None
If not None, data is split in a stratified fashion, using this as the class labels. Returns
resampled_arrayssequence of array-like of shape (n_samples,) or (n_samples, n_outputs)
Sequence of resampled copies of the collections. The original arrays are not impacted. See also
shuffle
Examples It is possible to mix sparse and dense arrays in the same run: >>> X = np.array([[1., 0.], [2., 1.], [0., 0.]])
>>> y = np.array([0, 1, 2])
>>> from scipy.sparse import coo_matrix
>>> X_sparse = coo_matrix(X)
>>> from sklearn.utils import resample
>>> X, X_sparse, y = resample(X, X_sparse, y, random_state=0)
>>> X
array([[1., 0.],
[2., 1.],
[1., 0.]])
>>> X_sparse
<3x2 sparse matrix of type '<... 'numpy.float64'>'
with 4 stored elements in Compressed Sparse Row format>
>>> X_sparse.toarray()
array([[1., 0.],
[2., 1.],
[1., 0.]])
>>> y
array([0, 1, 0])
>>> resample(y, n_samples=2, random_state=0)
array([0, 1])
Example using stratification: >>> y = [0, 0, 1, 1, 1, 1, 1, 1, 1]
>>> resample(y, n_samples=5, replace=False, stratify=y,
... random_state=0)
[1, 1, 1, 0, 1] | sklearn.modules.generated.sklearn.utils.resample |
sklearn.utils.safe_mask
sklearn.utils.safe_mask(X, mask) [source]
Return a mask which is safe to use on X. Parameters
X{array-like, sparse matrix}
Data on which to apply mask.
maskndarray
Mask to be used on X. Returns
maskndarray
Array that is safe to use on X. | sklearn.modules.generated.sklearn.utils.safe_mask |
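An illustrative sketch of why this helper exists: dense arrays accept boolean masks directly, while for sparse input the boolean mask is converted to integer row indices.

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.utils import safe_mask

X_dense = np.arange(12).reshape(4, 3)
X_sparse = csr_matrix(X_dense)
mask = np.array([True, False, True, False])

# Either way, the returned mask can be used to index the corresponding X.
rows_dense = X_dense[safe_mask(X_dense, mask)]
rows_sparse = X_sparse[safe_mask(X_sparse, mask)]
```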
sklearn.utils.safe_sqr
sklearn.utils.safe_sqr(X, *, copy=True) [source]
Element wise squaring of array-likes and sparse matrices. Parameters
X{array-like, ndarray, sparse matrix}
copybool, default=True
Whether to create a copy of X and operate on it or to perform inplace computation (default behaviour). Returns
X ** 2 : element-wise square | sklearn.modules.generated.sklearn.utils.safe_sqr |
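A minimal sketch with illustrative values: with copy=True (the default) the input is left untouched, and sparse input stays sparse.

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.utils import safe_sqr

a = np.array([-2.0, 3.0])
a_sq = safe_sqr(a)  # copy=True by default, so a is not modified

s_sq = safe_sqr(csr_matrix([[0.0, -2.0], [3.0, 0.0]]))  # stays sparse
```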
sklearn.utils.shuffle
sklearn.utils.shuffle(*arrays, random_state=None, n_samples=None) [source]
Shuffle arrays or sparse matrices in a consistent way. This is a convenience alias to resample(*arrays, replace=False) to do random permutations of the collections. Parameters
*arrayssequence of indexable data-structures
Indexable data-structures can be arrays, lists, dataframes or scipy sparse matrices with consistent first dimension.
random_stateint, RandomState instance or None, default=None
Determines random number generation for shuffling the data. Pass an int for reproducible results across multiple function calls. See Glossary.
n_samplesint, default=None
Number of samples to generate. If left to None this is automatically set to the first dimension of the arrays. It should not be larger than the length of arrays. Returns
shuffled_arrayssequence of indexable data-structures
Sequence of shuffled copies of the collections. The original arrays are not impacted. See also
resample
Examples It is possible to mix sparse and dense arrays in the same run: >>> X = np.array([[1., 0.], [2., 1.], [0., 0.]])
>>> y = np.array([0, 1, 2])
>>> from scipy.sparse import coo_matrix
>>> X_sparse = coo_matrix(X)
>>> from sklearn.utils import shuffle
>>> X, X_sparse, y = shuffle(X, X_sparse, y, random_state=0)
>>> X
array([[0., 0.],
[2., 1.],
[1., 0.]])
>>> X_sparse
<3x2 sparse matrix of type '<... 'numpy.float64'>'
with 3 stored elements in Compressed Sparse Row format>
>>> X_sparse.toarray()
array([[0., 0.],
[2., 1.],
[1., 0.]])
>>> y
array([2, 1, 0])
>>> shuffle(y, n_samples=2, random_state=0)
array([0, 1])
Examples using sklearn.utils.shuffle
Color Quantization using K-Means
Empirical evaluation of the impact of k-means initialization
Combine predictors using stacking
Model Complexity Influence
Prediction Latency
Early stopping of Stochastic Gradient Descent
Approximate nearest neighbors in TSNE
Effect of varying threshold for self-training | sklearn.modules.generated.sklearn.utils.shuffle |
sklearn.utils.sparsefuncs.incr_mean_variance_axis
sklearn.utils.sparsefuncs.incr_mean_variance_axis(X, *, axis, last_mean, last_var, last_n, weights=None) [source]
Compute incremental mean and variance along an axis on a CSR or CSC matrix. last_mean, last_var are the statistics computed at the last step by this function. Both must be initialized to 0-arrays of the proper size, i.e. the number of features in X. last_n is the number of samples encountered until now. Parameters
XCSR or CSC sparse matrix of shape (n_samples, n_features)
Input data.
axis{0, 1}
Axis along which the statistics should be computed.
last_meanndarray of shape (n_features,) or (n_samples,), dtype=floating
Array of means to update with the new data X. Should be of shape (n_features,) if axis=0 or (n_samples,) if axis=1.
last_varndarray of shape (n_features,) or (n_samples,), dtype=floating
Array of variances to update with the new data X. Should be of shape (n_features,) if axis=0 or (n_samples,) if axis=1.
last_nfloat or ndarray of shape (n_features,) or (n_samples,), dtype=floating
Sum of the weights seen so far, excluding the current weights. If not float, it should be of shape (n_samples,) if axis=0 or (n_features,) if axis=1. If float it corresponds to having same weights for all samples (or features).
weightsndarray of shape (n_samples,) or (n_features,), default=None
If axis is set to 0 shape is (n_samples,) or if axis is set to 1 shape is (n_features,). If it is set to None, then samples are equally weighted. New in version 0.24. Returns
meansndarray of shape (n_features,) or (n_samples,), dtype=floating
Updated feature-wise means if axis = 0 or sample-wise means if axis = 1.
variancesndarray of shape (n_features,) or (n_samples,), dtype=floating
Updated feature-wise variances if axis = 0 or sample-wise variances if axis = 1.
nndarray of shape (n_features,) or (n_samples,), dtype=integral
Updated number of seen samples per feature if axis=0 or number of seen features per sample if axis=1. If weights is not None, n is a sum of the weights of the seen samples or features instead of the actual number of seen samples or features. Notes NaNs are ignored in the algorithm. | sklearn.modules.generated.sklearn.utils.sparsefuncs.incr_mean_variance_axis |
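A minimal two-batch sketch (the data values are illustrative): statistics start at zero and are updated batch by batch, so after the second call they match those of the vertically stacked data.

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.utils.sparsefuncs import incr_mean_variance_axis

X1 = csr_matrix(np.array([[1.0, 0.0], [2.0, 2.0]]))
X2 = csr_matrix(np.array([[3.0, 4.0]]))

# Initialize the running statistics to 0-arrays of size n_features.
mean = np.zeros(2)
var = np.zeros(2)
n = np.zeros(2)

mean, var, n = incr_mean_variance_axis(X1, axis=0, last_mean=mean,
                                       last_var=var, last_n=n)
mean, var, n = incr_mean_variance_axis(X2, axis=0, last_mean=mean,
                                       last_var=var, last_n=n)
# mean and var now equal the feature-wise statistics of vstack([X1, X2]).
```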
sklearn.utils.sparsefuncs.inplace_column_scale
sklearn.utils.sparsefuncs.inplace_column_scale(X, scale) [source]
Inplace column scaling of a CSC/CSR matrix. Scale each feature of the data matrix by multiplying with specific scale provided by the caller assuming a (n_samples, n_features) shape. Parameters
Xsparse matrix of shape (n_samples, n_features)
Matrix to normalize using the variance of the features. It should be of CSC or CSR format.
scalendarray of shape (n_features,), dtype={np.float32, np.float64}
Array of precomputed feature-wise values to use for scaling. | sklearn.modules.generated.sklearn.utils.sparsefuncs.inplace_column_scale |
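A minimal sketch with illustrative scale factors; note the matrix is modified in place and nothing is returned.

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.utils.sparsefuncs import inplace_column_scale

X = csr_matrix(np.array([[1.0, 2.0], [3.0, 4.0]]))
# Column 0 is multiplied by 10, column 1 by 0.5, in place.
inplace_column_scale(X, np.array([10.0, 0.5]))
```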
sklearn.utils.sparsefuncs.inplace_csr_column_scale
sklearn.utils.sparsefuncs.inplace_csr_column_scale(X, scale) [source]
Inplace column scaling of a CSR matrix. Scale each feature of the data matrix by multiplying with specific scale provided by the caller assuming a (n_samples, n_features) shape. Parameters
Xsparse matrix of shape (n_samples, n_features)
Matrix to normalize using the variance of the features. It should be of CSR format.
scalendarray of shape (n_features,), dtype={np.float32, np.float64}
Array of precomputed feature-wise values to use for scaling. | sklearn.modules.generated.sklearn.utils.sparsefuncs.inplace_csr_column_scale |
sklearn.utils.sparsefuncs.inplace_row_scale
sklearn.utils.sparsefuncs.inplace_row_scale(X, scale) [source]
Inplace row scaling of a CSR or CSC matrix. Scale each row of the data matrix by multiplying with specific scale provided by the caller assuming a (n_samples, n_features) shape. Parameters
Xsparse matrix of shape (n_samples, n_features)
Matrix to be scaled. It should be of CSR or CSC format.
scalendarray of shape (n_samples,), dtype={np.float32, np.float64}
Array of precomputed sample-wise values to use for scaling. | sklearn.modules.generated.sklearn.utils.sparsefuncs.inplace_row_scale |
sklearn.utils.sparsefuncs.inplace_swap_column
sklearn.utils.sparsefuncs.inplace_swap_column(X, m, n) [source]
Swaps two columns of a CSC/CSR matrix in-place. Parameters
Xsparse matrix of shape (n_samples, n_features)
Matrix whose two columns are to be swapped. It should be of CSR or CSC format.
mint
Index of the column of X to be swapped.
nint
Index of the column of X to be swapped. | sklearn.modules.generated.sklearn.utils.sparsefuncs.inplace_swap_column |
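A minimal sketch with illustrative values; the swap happens in place.

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.utils.sparsefuncs import inplace_swap_column

X = csr_matrix(np.array([[1.0, 2.0], [3.0, 4.0]]))
inplace_swap_column(X, 0, 1)  # columns 0 and 1 exchange places in place
```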
sklearn.utils.sparsefuncs.inplace_swap_row
sklearn.utils.sparsefuncs.inplace_swap_row(X, m, n) [source]
Swaps two rows of a CSC/CSR matrix in-place. Parameters
Xsparse matrix of shape (n_samples, n_features)
Matrix whose two rows are to be swapped. It should be of CSR or CSC format.
mint
Index of the row of X to be swapped.
nint
Index of the row of X to be swapped. | sklearn.modules.generated.sklearn.utils.sparsefuncs.inplace_swap_row |
sklearn.utils.sparsefuncs.mean_variance_axis
sklearn.utils.sparsefuncs.mean_variance_axis(X, axis, weights=None, return_sum_weights=False) [source]
Compute mean and variance along an axis on a CSR or CSC matrix. Parameters
Xsparse matrix of shape (n_samples, n_features)
Input data. It can be of CSR or CSC format.
axis{0, 1}
Axis along which the statistics should be computed.
weightsndarray of shape (n_samples,) or (n_features,), default=None
If axis is set to 0, shape is (n_samples,); if axis is set to 1, shape is (n_features,). If set to None, then samples are equally weighted. New in version 0.24.
return_sum_weightsbool, default=False
If True, returns the sum of weights seen for each feature if axis=0 or each sample if axis=1. New in version 0.24. Returns
meansndarray of shape (n_features,), dtype=floating
Feature-wise means.
variancesndarray of shape (n_features,), dtype=floating
Feature-wise variances.
sum_weightsndarray of shape (n_features,), dtype=floating
Returned if return_sum_weights is True. | sklearn.modules.generated.sklearn.utils.sparsefuncs.mean_variance_axis |
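A minimal sketch with illustrative values; note that the implicit zeros of the sparse matrix count toward the statistics.

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.utils.sparsefuncs import mean_variance_axis

X = csr_matrix(np.array([[1.0, 0.0], [2.0, 2.0], [3.0, 4.0]]))
means, variances = mean_variance_axis(X, axis=0)  # feature-wise statistics
```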
sklearn.utils.sparsefuncs_fast.inplace_csr_row_normalize_l1
sklearn.utils.sparsefuncs_fast.inplace_csr_row_normalize_l1()
Inplace row normalize using the l1 norm | sklearn.modules.generated.sklearn.utils.sparsefuncs_fast.inplace_csr_row_normalize_l1 |
sklearn.utils.sparsefuncs_fast.inplace_csr_row_normalize_l2
sklearn.utils.sparsefuncs_fast.inplace_csr_row_normalize_l2()
Inplace row normalize using the l2 norm | sklearn.modules.generated.sklearn.utils.sparsefuncs_fast.inplace_csr_row_normalize_l2 |
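A minimal sketch of the l2 variant with illustrative values; the matrix is modified in place so each row ends up with unit l2 norm.

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.utils.sparsefuncs_fast import inplace_csr_row_normalize_l2

X = csr_matrix(np.array([[3.0, 4.0], [0.0, 5.0]]))
inplace_csr_row_normalize_l2(X)  # rows become [0.6, 0.8] and [0., 1.]
```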
sklearn.utils.validation.check_is_fitted
sklearn.utils.validation.check_is_fitted(estimator, attributes=None, *, msg=None, all_or_any=<built-in function all>) [source]
Perform is_fitted validation for estimator. Checks if the estimator is fitted by verifying the presence of fitted attributes (ending with a trailing underscore) and otherwise raises a NotFittedError with the given message. This utility is meant to be used internally by estimators themselves, typically in their own predict / transform methods. Parameters
estimatorestimator instance
estimator instance for which the check is performed.
attributesstr, list or tuple of str, default=None
Attribute name(s) given as string or a list/tuple of strings. E.g.: ["coef_", "estimator_", ...], "coef_". If None, the estimator is considered fitted if there exists an attribute that ends with an underscore and does not start with a double underscore.
msgstr, default=None
The default error message is, “This %(name)s instance is not fitted yet. Call ‘fit’ with appropriate arguments before using this estimator.” For custom messages if “%(name)s” is present in the message string, it is substituted for the estimator name. Eg. : “Estimator, %(name)s, must be fitted before sparsifying”.
all_or_anycallable, {all, any}, default=all
Specify whether all or any of the given attributes must exist. Returns
None
Raises
NotFittedError
If the attributes are not found. | sklearn.modules.generated.sklearn.utils.validation.check_is_fitted |
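A minimal sketch (LogisticRegression and the toy data are illustrative): before fit the check raises NotFittedError; after fit it passes silently and returns None.

```python
from sklearn.exceptions import NotFittedError
from sklearn.linear_model import LogisticRegression
from sklearn.utils.validation import check_is_fitted

clf = LogisticRegression()
try:
    check_is_fitted(clf)          # raises: no fitted attributes yet
    fitted_before = True
except NotFittedError:
    fitted_before = False

clf.fit([[0.0], [1.0]], [0, 1])
check_is_fitted(clf)              # passes silently after fit
```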
sklearn.utils.validation.check_memory
sklearn.utils.validation.check_memory(memory) [source]
Check that memory is joblib.Memory-like. joblib.Memory-like means that memory can be converted into a joblib.Memory instance (typically a str denoting the location) or has the same interface (has a cache method). Parameters
memoryNone, str or object with the joblib.Memory interface
Returns
memoryobject with the joblib.Memory interface
Raises
ValueError
If memory is not joblib.Memory-like. | sklearn.modules.generated.sklearn.utils.validation.check_memory |
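A minimal sketch (the temporary directory is illustrative): both None and a path string are converted to an object with the joblib.Memory interface.

```python
import tempfile
from sklearn.utils.validation import check_memory

mem_none = check_memory(None)                # Memory with caching disabled
mem_dir = check_memory(tempfile.mkdtemp())   # str -> Memory at that location
# Both expose the joblib.Memory interface, e.g. mem_dir.cache(func).
```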
sklearn.utils.validation.check_symmetric
sklearn.utils.validation.check_symmetric(array, *, tol=1e-10, raise_warning=True, raise_exception=False) [source]
Make sure that array is 2D, square and symmetric. If the array is not symmetric, then a symmetrized version is returned. Optionally, a warning or exception is raised if the matrix is not symmetric. Parameters
array{ndarray, sparse matrix}
Input object to check / convert. Must be two-dimensional and square, otherwise a ValueError will be raised.
tolfloat, default=1e-10
Absolute tolerance for equivalence of arrays. Default = 1E-10.
raise_warningbool, default=True
If True then raise a warning if conversion is required.
raise_exceptionbool, default=False
If True then raise an exception if array is not symmetric. Returns
array_sym{ndarray, sparse matrix}
Symmetrized version of the input array, i.e. the average of array and array.transpose(). If sparse, then duplicate entries are first summed and zeros are eliminated. | sklearn.modules.generated.sklearn.utils.validation.check_symmetric |
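A minimal sketch with illustrative values: for a non-symmetric input, the symmetrized average (array + array.T) / 2 is returned.

```python
import numpy as np
from sklearn.utils.validation import check_symmetric

A = np.array([[0.0, 1.0],
              [3.0, 0.0]])
# raise_warning=False silences the asymmetry warning for this demo.
A_sym = check_symmetric(A, raise_warning=False)
```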
sklearn.utils.validation.column_or_1d
sklearn.utils.validation.column_or_1d(y, *, warn=False) [source]
Ravel column or 1d numpy array, else raises an error. Parameters
yarray-like
warnbool, default=False
To control display of warnings. Returns
yndarray | sklearn.modules.generated.sklearn.utils.validation.column_or_1d |
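A minimal sketch with illustrative values: a column vector is ravelled to a 1d array.

```python
import numpy as np
from sklearn.utils.validation import column_or_1d

y_col = np.array([[1], [2], [3]])   # column vector, shape (3, 1)
y_flat = column_or_1d(y_col)        # ravelled to shape (3,)
```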
sklearn.utils.validation.has_fit_parameter
sklearn.utils.validation.has_fit_parameter(estimator, parameter) [source]
Checks whether the estimator’s fit method supports the given parameter. Parameters
estimatorobject
An estimator to inspect.
parameterstr
The searched parameter. Returns
is_parameter: bool
Whether the parameter was found to be a named parameter of the estimator’s fit method. Examples >>> from sklearn.svm import SVC
>>> has_fit_parameter(SVC(), "sample_weight")
True | sklearn.modules.generated.sklearn.utils.validation.has_fit_parameter |
sklearn.utils._safe_indexing
sklearn.utils._safe_indexing(X, indices, *, axis=0) [source]
Return rows, items or columns of X using indices. Warning This utility is documented, but private. This means that backward compatibility might be broken without any deprecation cycle. Parameters
Xarray-like, sparse-matrix, list, pandas.DataFrame, pandas.Series
Data from which to sample rows, items or columns. list are only supported when axis=0.
indicesbool, int, str, slice, array-like
If axis=0, boolean and integer array-like, integer slice, and scalar integer are supported.
If axis=1:
to select a single column, indices can be of int type for all X types and str only for dataframe. The selected subset will be 1D, unless X is a sparse matrix in which case it will be 2D. To select multiple columns, indices can be one of the following: list, array, slice. The type used in these containers can be one of the following: int, bool and str. However, str is only supported when X is a dataframe. The selected subset will be 2D.
axisint, default=0
The axis along which X will be subsampled. axis=0 will select rows while axis=1 will select columns. Returns
subset
Subset of X on axis 0 or 1. Notes CSR, CSC, and LIL sparse matrices are supported. COO sparse matrices are not supported. | sklearn.modules.generated.sklearn.utils._safe_indexing |
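A minimal sketch with illustrative values (bearing in mind the private-API warning above): selecting rows keeps 2D shape, while selecting a single column of a dense array yields a 1D result.

```python
import numpy as np
from sklearn.utils import _safe_indexing

X = np.arange(12).reshape(4, 3)
rows = _safe_indexing(X, [0, 2], axis=0)   # rows 0 and 2 -> shape (2, 3)
col = _safe_indexing(X, 1, axis=1)         # single column -> 1D, shape (4,)
```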
sklearn.svm.l1_min_c(X, y, *, loss='squared_hinge', fit_intercept=True, intercept_scaling=1.0) [source]
Return the lowest bound for C such that for C in (l1_min_C, infinity) the model is guaranteed not to be empty. This applies to l1 penalized classifiers, such as LinearSVC with penalty=’l1’ and linear_model.LogisticRegression with penalty=’l1’. This value is valid if class_weight parameter in fit() is not set. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target vector relative to X.
loss{‘squared_hinge’, ‘log’}, default=’squared_hinge’
Specifies the loss function. With ‘squared_hinge’ it is the squared hinge loss (a.k.a. L2 loss). With ‘log’ it is the loss of logistic regression models.
fit_interceptbool, default=True
Specifies if the intercept should be fitted by the model. It must match the fit() method parameter.
intercept_scalingfloat, default=1.0
When fit_intercept is True, the instance vector x becomes [x, intercept_scaling], i.e. a “synthetic” feature with constant value equal to intercept_scaling is appended to the instance vector. It must match the fit() method parameter. Returns
l1_min_cfloat
Minimum value for C. | sklearn.modules.generated.sklearn.svm.l1_min_c#sklearn.svm.l1_min_c |
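A minimal sketch (the dataset parameters are illustrative): values of C above the returned bound guarantee that an l1-penalized model has at least one non-zero coefficient.

```python
from sklearn.datasets import make_classification
from sklearn.svm import l1_min_c

X, y = make_classification(n_samples=50, n_features=4, random_state=0)
c_min = l1_min_c(X, y, loss="squared_hinge")
# Choose C > c_min when fitting, e.g., LinearSVC(penalty='l1', C=10 * c_min).
```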
class sklearn.svm.LinearSVC(penalty='l2', loss='squared_hinge', *, dual=True, tol=0.0001, C=1.0, multi_class='ovr', fit_intercept=True, intercept_scaling=1, class_weight=None, verbose=0, random_state=None, max_iter=1000) [source]
Linear Support Vector Classification. Similar to SVC with parameter kernel=’linear’, but implemented in terms of liblinear rather than libsvm, so it has more flexibility in the choice of penalties and loss functions and should scale better to large numbers of samples. This class supports both dense and sparse input and the multiclass support is handled according to a one-vs-the-rest scheme. Read more in the User Guide. Parameters
penalty{‘l1’, ‘l2’}, default=’l2’
Specifies the norm used in the penalization. The ‘l2’ penalty is the standard used in SVC. The ‘l1’ leads to coef_ vectors that are sparse.
loss{‘hinge’, ‘squared_hinge’}, default=’squared_hinge’
Specifies the loss function. ‘hinge’ is the standard SVM loss (used e.g. by the SVC class) while ‘squared_hinge’ is the square of the hinge loss. The combination of penalty='l1' and loss='hinge' is not supported.
dualbool, default=True
Select the algorithm to either solve the dual or primal optimization problem. Prefer dual=False when n_samples > n_features.
tolfloat, default=1e-4
Tolerance for stopping criteria.
Cfloat, default=1.0
Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive.
multi_class{‘ovr’, ‘crammer_singer’}, default=’ovr’
Determines the multi-class strategy if y contains more than two classes. "ovr" trains n_classes one-vs-rest classifiers, while "crammer_singer" optimizes a joint objective over all classes. While crammer_singer is interesting from a theoretical perspective as it is consistent, it is seldom used in practice as it rarely leads to better accuracy and is more expensive to compute. If "crammer_singer" is chosen, the options loss, penalty and dual will be ignored.
fit_interceptbool, default=True
Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be already centered).
intercept_scalingfloat, default=1
When self.fit_intercept is True, the instance vector x becomes [x, self.intercept_scaling], i.e. a “synthetic” feature with constant value equal to intercept_scaling is appended to the instance vector. The intercept becomes intercept_scaling * synthetic feature weight. Note: the synthetic feature weight is subject to l1/l2 regularization like all other features. To lessen the effect of regularization on synthetic feature weight (and therefore on the intercept) intercept_scaling has to be increased.
class_weightdict or ‘balanced’, default=None
Set the parameter C of class i to class_weight[i]*C for SVC. If not given, all classes are supposed to have weight one. The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)).
verboseint, default=0
Enable verbose output. Note that this setting takes advantage of a per-process runtime setting in liblinear that, if enabled, may not work properly in a multithreaded context.
random_stateint, RandomState instance or None, default=None
Controls the pseudo random number generation for shuffling the data for the dual coordinate descent (if dual=True). When dual=False the underlying implementation of LinearSVC is not random and random_state has no effect on the results. Pass an int for reproducible output across multiple function calls. See Glossary.
max_iterint, default=1000
The maximum number of iterations to be run. Attributes
coef_ndarray of shape (1, n_features) if n_classes == 2 else (n_classes, n_features)
Weights assigned to the features (coefficients in the primal problem). This is only available in the case of a linear kernel. coef_ is a readonly property derived from raw_coef_ that follows the internal memory layout of liblinear.
intercept_ndarray of shape (1,) if n_classes == 2 else (n_classes,)
Constants in decision function.
classes_ndarray of shape (n_classes,)
The unique classes labels.
n_iter_int
Maximum number of iterations run across all classes. See also
SVC
Implementation of Support Vector Machine classifier using libsvm: the kernel can be non-linear but its SMO algorithm does not scale to large numbers of samples as LinearSVC does. Furthermore SVC multi-class mode is implemented using the one-vs-one scheme while LinearSVC uses one-vs-the-rest. It is possible to implement one vs the rest with SVC by using the OneVsRestClassifier wrapper. Finally SVC can fit dense data without memory copy if the input is C-contiguous. Sparse data will still incur memory copy though.
sklearn.linear_model.SGDClassifier
SGDClassifier can optimize the same cost function as LinearSVC by adjusting the penalty and loss parameters. In addition it requires less memory, allows incremental (online) learning, and implements various loss functions and regularization regimes. Notes The underlying C implementation uses a random number generator to select features when fitting the model. It is thus not uncommon to have slightly different results for the same input data. If that happens, try with a smaller tol parameter. The underlying implementation, liblinear, uses a sparse internal representation for the data that will incur a memory copy. Predict output may not match that of standalone liblinear in certain cases. See differences from liblinear in the narrative documentation. References LIBLINEAR: A Library for Large Linear Classification Examples >>> from sklearn.svm import LinearSVC
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.datasets import make_classification
>>> X, y = make_classification(n_features=4, random_state=0)
>>> clf = make_pipeline(StandardScaler(),
... LinearSVC(random_state=0, tol=1e-5))
>>> clf.fit(X, y)
Pipeline(steps=[('standardscaler', StandardScaler()),
('linearsvc', LinearSVC(random_state=0, tol=1e-05))])
>>> print(clf.named_steps['linearsvc'].coef_)
[[0.141... 0.526... 0.679... 0.493...]]
>>> print(clf.named_steps['linearsvc'].intercept_)
[0.1693...]
>>> print(clf.predict([[0, 0, 0, 0]]))
[1]
Methods
decision_function(X) Predict confidence scores for samples.
densify() Convert coefficient matrix to dense array format.
fit(X, y[, sample_weight]) Fit the model according to the given training data.
get_params([deep]) Get parameters for this estimator.
predict(X) Predict class labels for samples in X.
score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of this estimator.
sparsify() Convert coefficient matrix to sparse format.
decision_function(X) [source]
Predict confidence scores for samples. The confidence score for a sample is proportional to the signed distance of that sample to the hyperplane. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
array, shape=(n_samples,) if n_classes == 2 else (n_samples, n_classes)
Confidence scores per (sample, class) combination. In the binary case, confidence score for self.classes_[1] where >0 means this class would be predicted.
densify() [source]
Convert coefficient matrix to dense array format. Converts the coef_ member (back) to a numpy.ndarray. This is the default format of coef_ and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op. Returns
self
Fitted estimator.
fit(X, y, sample_weight=None) [source]
Fit the model according to the given training data. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target vector relative to X.
sample_weightarray-like of shape (n_samples,), default=None
Array of weights that are assigned to individual samples. If not provided, then each sample is given unit weight. New in version 0.18. Returns
selfobject
An instance of the estimator.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Predict class labels for samples in X. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
Carray, shape [n_samples]
Predicted class label per sample.
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
sparsify() [source]
Convert coefficient matrix to sparse format. Converts the coef_ member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation. The intercept_ member is not converted. Returns
self
Fitted estimator. Notes For non-sparse models, i.e. when there are not many zeros in coef_, this may actually increase memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with (coef_ == 0).sum(), must be more than 50% for this to provide significant benefits. After calling this method, further fitting with the partial_fit method (if any) will not work until you call densify. | sklearn.modules.generated.sklearn.svm.linearsvc#sklearn.svm.LinearSVC |
sklearn.svm.LinearSVC
class sklearn.svm.LinearSVC(penalty='l2', loss='squared_hinge', *, dual=True, tol=0.0001, C=1.0, multi_class='ovr', fit_intercept=True, intercept_scaling=1, class_weight=None, verbose=0, random_state=None, max_iter=1000) [source]
Linear Support Vector Classification. Similar to SVC with parameter kernel=’linear’, but implemented in terms of liblinear rather than libsvm, so it has more flexibility in the choice of penalties and loss functions and should scale better to large numbers of samples. This class supports both dense and sparse input and the multiclass support is handled according to a one-vs-the-rest scheme. Read more in the User Guide. Parameters
penalty{‘l1’, ‘l2’}, default=’l2’
Specifies the norm used in the penalization. The ‘l2’ penalty is the standard used in SVC. The ‘l1’ leads to coef_ vectors that are sparse.
loss{‘hinge’, ‘squared_hinge’}, default=’squared_hinge’
Specifies the loss function. ‘hinge’ is the standard SVM loss (used e.g. by the SVC class) while ‘squared_hinge’ is the square of the hinge loss. The combination of penalty='l1' and loss='hinge' is not supported.
dualbool, default=True
Select the algorithm to either solve the dual or primal optimization problem. Prefer dual=False when n_samples > n_features.
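As a rough illustration of this rule of thumb, the preference can be written as a tiny helper (a sketch; `choose_dual` is a hypothetical name, not part of scikit-learn):

```python
import numpy as np

# Hypothetical helper implementing the rule of thumb above:
# prefer the primal problem (dual=False) when n_samples > n_features.
def choose_dual(X):
    n_samples, n_features = X.shape
    return not (n_samples > n_features)

X_tall = np.zeros((1000, 4))   # many samples, few features
X_wide = np.zeros((50, 200))   # few samples, many features
print(choose_dual(X_tall))     # False -> solve the primal
print(choose_dual(X_wide))     # True  -> solve the dual
```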
tolfloat, default=1e-4
Tolerance for stopping criteria.
Cfloat, default=1.0
Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive.
multi_class{‘ovr’, ‘crammer_singer’}, default=’ovr’
Determines the multi-class strategy if y contains more than two classes. "ovr" trains n_classes one-vs-rest classifiers, while "crammer_singer" optimizes a joint objective over all classes. While crammer_singer is interesting from a theoretical perspective as it is consistent, it is seldom used in practice as it rarely leads to better accuracy and is more expensive to compute. If "crammer_singer" is chosen, the options loss, penalty and dual will be ignored.
fit_interceptbool, default=True
Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be already centered).
intercept_scalingfloat, default=1
When self.fit_intercept is True, the instance vector x becomes [x, self.intercept_scaling], i.e. a “synthetic” feature with constant value equal to intercept_scaling is appended to the instance vector. The intercept becomes intercept_scaling * synthetic feature weight. Note that the synthetic feature weight is subject to l1/l2 regularization like all other features. To lessen the effect of regularization on the synthetic feature weight (and therefore on the intercept), intercept_scaling has to be increased.
class_weightdict or ‘balanced’, default=None
Set the parameter C of class i to class_weight[i]*C for SVC. If not given, all classes are supposed to have weight one. The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)).
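The “balanced” heuristic stated above can be reproduced directly with NumPy; a minimal sketch for an imbalanced label vector:

```python
import numpy as np

# Reproduce the "balanced" class-weight heuristic:
# n_samples / (n_classes * np.bincount(y))
y = np.array([0, 0, 0, 0, 1])          # class 0 is 4x as frequent as class 1
n_samples, n_classes = len(y), len(np.unique(y))
weights = n_samples / (n_classes * np.bincount(y))
print(weights)  # [0.625 2.5] -- the rare class gets the larger weight
```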
verboseint, default=0
Enable verbose output. Note that this setting takes advantage of a per-process runtime setting in liblinear that, if enabled, may not work properly in a multithreaded context.
random_stateint, RandomState instance or None, default=None
Controls the pseudo random number generation for shuffling the data for the dual coordinate descent (if dual=True). When dual=False the underlying implementation of LinearSVC is not random and random_state has no effect on the results. Pass an int for reproducible output across multiple function calls. See Glossary.
max_iterint, default=1000
The maximum number of iterations to be run. Attributes
coef_ndarray of shape (1, n_features) if n_classes == 2 else (n_classes, n_features)
Weights assigned to the features (coefficients in the primal problem). This is only available in the case of a linear kernel. coef_ is a readonly property derived from raw_coef_ that follows the internal memory layout of liblinear.
intercept_ndarray of shape (1,) if n_classes == 2 else (n_classes,)
Constants in decision function.
classes_ndarray of shape (n_classes,)
The unique class labels.
n_iter_int
Maximum number of iterations run across all classes. See also
SVC
Implementation of Support Vector Machine classifier using libsvm: the kernel can be non-linear but its SMO algorithm does not scale to large numbers of samples as LinearSVC does. Furthermore, SVC's multi-class mode is implemented using the one-vs-one scheme, while LinearSVC uses one-vs-the-rest. It is possible to implement one-vs-the-rest with SVC by using the OneVsRestClassifier wrapper. Finally, SVC can fit dense data without a memory copy if the input is C-contiguous. Sparse data will still incur a memory copy, though.
sklearn.linear_model.SGDClassifier
SGDClassifier can optimize the same cost function as LinearSVC by adjusting the penalty and loss parameters. In addition it requires less memory, allows incremental (online) learning, and implements various loss functions and regularization regimes. Notes The underlying C implementation uses a random number generator to select features when fitting the model. It is thus not uncommon to have slightly different results for the same input data. If that happens, try with a smaller tol parameter. The underlying implementation, liblinear, uses a sparse internal representation for the data that will incur a memory copy. Predict output may not match that of standalone liblinear in certain cases. See differences from liblinear in the narrative documentation. References LIBLINEAR: A Library for Large Linear Classification Examples >>> from sklearn.svm import LinearSVC
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.datasets import make_classification
>>> X, y = make_classification(n_features=4, random_state=0)
>>> clf = make_pipeline(StandardScaler(),
... LinearSVC(random_state=0, tol=1e-5))
>>> clf.fit(X, y)
Pipeline(steps=[('standardscaler', StandardScaler()),
('linearsvc', LinearSVC(random_state=0, tol=1e-05))])
>>> print(clf.named_steps['linearsvc'].coef_)
[[0.141... 0.526... 0.679... 0.493...]]
>>> print(clf.named_steps['linearsvc'].intercept_)
[0.1693...]
>>> print(clf.predict([[0, 0, 0, 0]]))
[1]
Methods
decision_function(X) Predict confidence scores for samples.
densify() Convert coefficient matrix to dense array format.
fit(X, y[, sample_weight]) Fit the model according to the given training data.
get_params([deep]) Get parameters for this estimator.
predict(X) Predict class labels for samples in X.
score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of this estimator.
sparsify() Convert coefficient matrix to sparse format.
decision_function(X) [source]
Predict confidence scores for samples. The confidence score for a sample is proportional to the signed distance of that sample to the hyperplane. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
array, shape=(n_samples,) if n_classes == 2 else (n_samples, n_classes)
Confidence scores per (sample, class) combination. In the binary case, confidence score for self.classes_[1] where >0 means this class would be predicted.
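A small check of this sign convention (reusing make_classification as in the Examples section above) confirming that a positive score in the binary case corresponds to predicting classes_[1]:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_features=4, random_state=0)
clf = LinearSVC(random_state=0, tol=1e-5).fit(X, y)

scores = clf.decision_function(X)   # shape (n_samples,) in the binary case
preds = clf.predict(X)
# A positive score means classes_[1] is predicted, a negative one classes_[0].
agree = np.all(preds == np.where(scores > 0, clf.classes_[1], clf.classes_[0]))
print(agree)
```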
densify() [source]
Convert coefficient matrix to dense array format. Converts the coef_ member (back) to a numpy.ndarray. This is the default format of coef_ and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op. Returns
self
Fitted estimator.
fit(X, y, sample_weight=None) [source]
Fit the model according to the given training data. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target vector relative to X.
sample_weightarray-like of shape (n_samples,), default=None
Array of weights that are assigned to individual samples. If not provided, then each sample is given unit weight. New in version 0.18. Returns
selfobject
An instance of the estimator.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Predict class labels for samples in X. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
Carray, shape [n_samples]
Predicted class label per sample.
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
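A brief sketch of the nested <component>__<parameter> syntax with a Pipeline (the parameter values chosen here are arbitrary):

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

pipe = make_pipeline(StandardScaler(), LinearSVC())
# Nested parameters use the <component>__<parameter> convention.
pipe.set_params(linearsvc__C=10.0, linearsvc__max_iter=2000)
print(pipe.get_params()['linearsvc__C'])       # 10.0
print(pipe.named_steps['linearsvc'].max_iter)  # 2000
```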
sparsify() [source]
Convert coefficient matrix to sparse format. Converts the coef_ member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation. The intercept_ member is not converted. Returns
self
Fitted estimator. Notes For non-sparse models, i.e. when there are not many zeros in coef_, this may actually increase memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with (coef_ == 0).sum(), must be more than 50% for this to provide significant benefits. After calling this method, further fitting with the partial_fit method (if any) will not work until you call densify.
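The rule of thumb above can be checked before calling sparsify; a sketch (the dataset shape and L1 penalty settings are illustrative, chosen so that many coefficients are likely to be zero):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# An L1-penalized fit tends to zero out uninformative coefficients.
X, y = make_classification(n_features=50, n_informative=5, random_state=0)
clf = LinearSVC(penalty='l1', dual=False, tol=1e-5, random_state=0).fit(X, y)

zero_fraction = (clf.coef_ == 0).mean()
print(f"{zero_fraction:.0%} of coefficients are zero")
if zero_fraction > 0.5:   # >50% zeros: sparsify is likely to pay off
    clf.sparsify()
```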
Examples using sklearn.svm.LinearSVC
Release Highlights for scikit-learn 0.22
Comparison of Calibration of Classifiers
Probability Calibration curves
Pipeline Anova SVM
Univariate Feature Selection
Scalable learning with polynomial kernel approximation
Explicit feature map approximation for RBF kernels
Detection error tradeoff (DET) curve
Balance model complexity and cross-validated score
Precision-Recall
Selecting dimensionality reduction with Pipeline and GridSearchCV
Column Transformer with Heterogeneous Data Sources
Feature discretization
Plot the support vectors in LinearSVC
Plot different SVM classifiers in the iris dataset
Scaling the regularization parameter for SVCs
Classification of text documents using sparse features | sklearn.modules.generated.sklearn.svm.linearsvc |
decision_function(X) [source]
Predict confidence scores for samples. The confidence score for a sample is proportional to the signed distance of that sample to the hyperplane. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
array, shape=(n_samples,) if n_classes == 2 else (n_samples, n_classes)
Confidence scores per (sample, class) combination. In the binary case, confidence score for self.classes_[1] where >0 means this class would be predicted. | sklearn.modules.generated.sklearn.svm.linearsvc#sklearn.svm.LinearSVC.decision_function |
densify() [source]
Convert coefficient matrix to dense array format. Converts the coef_ member (back) to a numpy.ndarray. This is the default format of coef_ and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op. Returns
self
Fitted estimator. | sklearn.modules.generated.sklearn.svm.linearsvc#sklearn.svm.LinearSVC.densify |
fit(X, y, sample_weight=None) [source]
Fit the model according to the given training data. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target vector relative to X.
sample_weightarray-like of shape (n_samples,), default=None
Array of weights that are assigned to individual samples. If not provided, then each sample is given unit weight. New in version 0.18. Returns
selfobject
An instance of the estimator. | sklearn.modules.generated.sklearn.svm.linearsvc#sklearn.svm.LinearSVC.fit |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.svm.linearsvc#sklearn.svm.LinearSVC.get_params |
predict(X) [source]
Predict class labels for samples in X. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
Carray, shape [n_samples]
Predicted class label per sample. | sklearn.modules.generated.sklearn.svm.linearsvc#sklearn.svm.LinearSVC.predict |
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y. | sklearn.modules.generated.sklearn.svm.linearsvc#sklearn.svm.LinearSVC.score |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.svm.linearsvc#sklearn.svm.LinearSVC.set_params |
sparsify() [source]
Convert coefficient matrix to sparse format. Converts the coef_ member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation. The intercept_ member is not converted. Returns
self
Fitted estimator. Notes For non-sparse models, i.e. when there are not many zeros in coef_, this may actually increase memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with (coef_ == 0).sum(), must be more than 50% for this to provide significant benefits. After calling this method, further fitting with the partial_fit method (if any) will not work until you call densify. | sklearn.modules.generated.sklearn.svm.linearsvc#sklearn.svm.LinearSVC.sparsify |
class sklearn.svm.LinearSVR(*, epsilon=0.0, tol=0.0001, C=1.0, loss='epsilon_insensitive', fit_intercept=True, intercept_scaling=1.0, dual=True, verbose=0, random_state=None, max_iter=1000) [source]
Linear Support Vector Regression. Similar to SVR with parameter kernel=’linear’, but implemented in terms of liblinear rather than libsvm, so it has more flexibility in the choice of penalties and loss functions and should scale better to large numbers of samples. This class supports both dense and sparse input. Read more in the User Guide. New in version 0.16. Parameters
epsilonfloat, default=0.0
Epsilon parameter in the epsilon-insensitive loss function. Note that the value of this parameter depends on the scale of the target variable y. If unsure, set epsilon=0.
tolfloat, default=1e-4
Tolerance for stopping criteria.
Cfloat, default=1.0
Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive.
loss{‘epsilon_insensitive’, ‘squared_epsilon_insensitive’}, default=’epsilon_insensitive’
Specifies the loss function. The epsilon-insensitive loss (standard SVR) is the L1 loss, while the squared epsilon-insensitive loss (‘squared_epsilon_insensitive’) is the L2 loss.
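Written out in NumPy, the two losses look as follows (a sketch; the function names are illustrative, not scikit-learn API):

```python
import numpy as np

# Per-sample epsilon-insensitive loss: errors smaller than epsilon cost nothing.
def epsilon_insensitive(y_true, y_pred, epsilon):
    return np.maximum(np.abs(y_true - y_pred) - epsilon, 0.0)

# Squared variant: the same margin, with the excess error squared.
def squared_epsilon_insensitive(y_true, y_pred, epsilon):
    return epsilon_insensitive(y_true, y_pred, epsilon) ** 2

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.05, 2.5, 1.0])
print(epsilon_insensitive(y_true, y_pred, epsilon=0.1))  # [0.  0.4 1.9]
```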
fit_interceptbool, default=True
Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be already centered).
intercept_scalingfloat, default=1.
When self.fit_intercept is True, the instance vector x becomes [x, self.intercept_scaling], i.e. a “synthetic” feature with constant value equal to intercept_scaling is appended to the instance vector. The intercept becomes intercept_scaling * synthetic feature weight. Note that the synthetic feature weight is subject to l1/l2 regularization like all other features. To lessen the effect of regularization on the synthetic feature weight (and therefore on the intercept), intercept_scaling has to be increased.
dualbool, default=True
Select the algorithm to either solve the dual or primal optimization problem. Prefer dual=False when n_samples > n_features.
verboseint, default=0
Enable verbose output. Note that this setting takes advantage of a per-process runtime setting in liblinear that, if enabled, may not work properly in a multithreaded context.
random_stateint, RandomState instance or None, default=None
Controls the pseudo random number generation for shuffling the data. Pass an int for reproducible output across multiple function calls. See Glossary.
max_iterint, default=1000
The maximum number of iterations to be run. Attributes
coef_ndarray of shape (n_features) if n_classes == 2 else (n_classes, n_features)
Weights assigned to the features (coefficients in the primal problem). This is only available in the case of a linear kernel. coef_ is a readonly property derived from raw_coef_ that follows the internal memory layout of liblinear.
intercept_ndarray of shape (1) if n_classes == 2 else (n_classes)
Constants in decision function.
n_iter_int
Maximum number of iterations run across all classes. See also
LinearSVC
Implementation of Support Vector Machine classifier using the same library as this class (liblinear).
SVR
Implementation of Support Vector Machine regression using libsvm: the kernel can be non-linear but its SMO algorithm does not scale to large numbers of samples as LinearSVR does.
sklearn.linear_model.SGDRegressor
SGDRegressor can optimize the same cost function as LinearSVR by adjusting the penalty and loss parameters. In addition it requires less memory, allows incremental (online) learning, and implements various loss functions and regularization regimes. Examples >>> from sklearn.svm import LinearSVR
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.datasets import make_regression
>>> X, y = make_regression(n_features=4, random_state=0)
>>> regr = make_pipeline(StandardScaler(),
... LinearSVR(random_state=0, tol=1e-5))
>>> regr.fit(X, y)
Pipeline(steps=[('standardscaler', StandardScaler()),
('linearsvr', LinearSVR(random_state=0, tol=1e-05))])
>>> print(regr.named_steps['linearsvr'].coef_)
[18.582... 27.023... 44.357... 64.522...]
>>> print(regr.named_steps['linearsvr'].intercept_)
[-4...]
>>> print(regr.predict([[0, 0, 0, 0]]))
[-2.384...]
Methods
fit(X, y[, sample_weight]) Fit the model according to the given training data.
get_params([deep]) Get parameters for this estimator.
predict(X) Predict using the linear model.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
fit(X, y, sample_weight=None) [source]
Fit the model according to the given training data. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target vector relative to X.
sample_weightarray-like of shape (n_samples,), default=None
Array of weights that are assigned to individual samples. If not provided, then each sample is given unit weight. New in version 0.18. Returns
selfobject
An instance of the estimator.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Predict using the linear model. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
Carray, shape (n_samples,)
Returns predicted values.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
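The definition above can be verified by hand against sklearn.metrics.r2_score (the sample values here are arbitrary):

```python
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

u = ((y_true - y_pred) ** 2).sum()         # residual sum of squares
v = ((y_true - y_true.mean()) ** 2).sum()  # total sum of squares
print(1 - u / v)                           # 0.9486...
print(r2_score(y_true, y_pred))            # same value
```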
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.svm.linearsvr#sklearn.svm.LinearSVR |
sklearn.svm.LinearSVR
class sklearn.svm.LinearSVR(*, epsilon=0.0, tol=0.0001, C=1.0, loss='epsilon_insensitive', fit_intercept=True, intercept_scaling=1.0, dual=True, verbose=0, random_state=None, max_iter=1000) [source]
Linear Support Vector Regression. Similar to SVR with parameter kernel=’linear’, but implemented in terms of liblinear rather than libsvm, so it has more flexibility in the choice of penalties and loss functions and should scale better to large numbers of samples. This class supports both dense and sparse input. Read more in the User Guide. New in version 0.16. Parameters
epsilonfloat, default=0.0
Epsilon parameter in the epsilon-insensitive loss function. Note that the value of this parameter depends on the scale of the target variable y. If unsure, set epsilon=0.
tolfloat, default=1e-4
Tolerance for stopping criteria.
Cfloat, default=1.0
Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive.
loss{‘epsilon_insensitive’, ‘squared_epsilon_insensitive’}, default=’epsilon_insensitive’
Specifies the loss function. The epsilon-insensitive loss (standard SVR) is the L1 loss, while the squared epsilon-insensitive loss (‘squared_epsilon_insensitive’) is the L2 loss.
fit_interceptbool, default=True
Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be already centered).
intercept_scalingfloat, default=1.
When self.fit_intercept is True, the instance vector x becomes [x, self.intercept_scaling], i.e. a “synthetic” feature with constant value equal to intercept_scaling is appended to the instance vector. The intercept becomes intercept_scaling * synthetic feature weight. Note that the synthetic feature weight is subject to l1/l2 regularization like all other features. To lessen the effect of regularization on the synthetic feature weight (and therefore on the intercept), intercept_scaling has to be increased.
dualbool, default=True
Select the algorithm to either solve the dual or primal optimization problem. Prefer dual=False when n_samples > n_features.
verboseint, default=0
Enable verbose output. Note that this setting takes advantage of a per-process runtime setting in liblinear that, if enabled, may not work properly in a multithreaded context.
random_stateint, RandomState instance or None, default=None
Controls the pseudo random number generation for shuffling the data. Pass an int for reproducible output across multiple function calls. See Glossary.
max_iterint, default=1000
The maximum number of iterations to be run. Attributes
coef_ndarray of shape (n_features) if n_classes == 2 else (n_classes, n_features)
Weights assigned to the features (coefficients in the primal problem). This is only available in the case of a linear kernel. coef_ is a readonly property derived from raw_coef_ that follows the internal memory layout of liblinear.
intercept_ndarray of shape (1) if n_classes == 2 else (n_classes)
Constants in decision function.
n_iter_int
Maximum number of iterations run across all classes. See also
LinearSVC
Implementation of Support Vector Machine classifier using the same library as this class (liblinear).
SVR
Implementation of Support Vector Machine regression using libsvm: the kernel can be non-linear but its SMO algorithm does not scale to large numbers of samples as LinearSVR does.
sklearn.linear_model.SGDRegressor
SGDRegressor can optimize the same cost function as LinearSVR by adjusting the penalty and loss parameters. In addition it requires less memory, allows incremental (online) learning, and implements various loss functions and regularization regimes. Examples >>> from sklearn.svm import LinearSVR
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.datasets import make_regression
>>> X, y = make_regression(n_features=4, random_state=0)
>>> regr = make_pipeline(StandardScaler(),
... LinearSVR(random_state=0, tol=1e-5))
>>> regr.fit(X, y)
Pipeline(steps=[('standardscaler', StandardScaler()),
('linearsvr', LinearSVR(random_state=0, tol=1e-05))])
>>> print(regr.named_steps['linearsvr'].coef_)
[18.582... 27.023... 44.357... 64.522...]
>>> print(regr.named_steps['linearsvr'].intercept_)
[-4...]
>>> print(regr.predict([[0, 0, 0, 0]]))
[-2.384...]
Methods
fit(X, y[, sample_weight]) Fit the model according to the given training data.
get_params([deep]) Get parameters for this estimator.
predict(X) Predict using the linear model.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
fit(X, y, sample_weight=None) [source]
Fit the model according to the given training data. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target vector relative to X.
sample_weightarray-like of shape (n_samples,), default=None
Array of weights that are assigned to individual samples. If not provided, then each sample is given unit weight. New in version 0.18. Returns
selfobject
An instance of the estimator.
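A minimal sketch of fitting with per-sample weights, as described above. The dataset and the weighting scheme are illustrative assumptions, not taken from the docs; samples without an explicit weight default to unit weight.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.svm import LinearSVR

X, y = make_regression(n_samples=100, n_features=4, random_state=0)

# Up-weight the last 20 samples five-fold; the rest keep unit weight.
w = np.ones(100)
w[-20:] = 5.0

reg = LinearSVR(random_state=0, tol=1e-5, max_iter=10000)
reg.fit(X, y, sample_weight=w)  # fit returns the estimator itself
print(reg.coef_.shape)          # one coefficient per feature: (4,)
```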
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Predict using the linear model. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Samples. Returns
Carray, shape (n_samples,)
Returns predicted values.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
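A short sketch checking that score is exactly the \(R^2\) defined above, reusing the pipeline from the Examples section; r2_score is assumed as the reference implementation.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.metrics import r2_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVR

X, y = make_regression(n_features=4, random_state=0)
regr = make_pipeline(StandardScaler(), LinearSVR(random_state=0, tol=1e-5))
regr.fit(X, y)

# score(X, y) == 1 - u/v, which is r2_score(y, predictions)
assert np.isclose(regr.score(X, y), r2_score(y, regr.predict(X)))
```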
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.svm.linearsvr |
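The `<component>__<parameter>` syntax for nested objects can be sketched as follows; the step name 'linearsvr' is the one make_pipeline auto-generates from the class name.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVR

regr = make_pipeline(StandardScaler(), LinearSVR(random_state=0))

# Update a parameter of the nested LinearSVR step through the pipeline.
regr.set_params(linearsvr__C=0.5, linearsvr__max_iter=5000)
print(regr.get_params()['linearsvr__C'])  # 0.5
```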
sklearn.svm.NuSVC
class sklearn.svm.NuSVC(*, nu=0.5, kernel='rbf', degree=3, gamma='scale', coef0=0.0, shrinking=True, probability=False, tol=0.001, cache_size=200, class_weight=None, verbose=False, max_iter=-1, decision_function_shape='ovr', break_ties=False, random_state=None) [source]
Nu-Support Vector Classification. Similar to SVC but uses a parameter to control the number of support vectors. The implementation is based on libsvm. Read more in the User Guide. Parameters
nufloat, default=0.5
An upper bound on the fraction of margin errors (see User Guide) and a lower bound of the fraction of support vectors. Should be in the interval (0, 1].
kernel{‘linear’, ‘poly’, ‘rbf’, ‘sigmoid’, ‘precomputed’}, default=’rbf’
Specifies the kernel type to be used in the algorithm. It must be one of ‘linear’, ‘poly’, ‘rbf’, ‘sigmoid’, ‘precomputed’ or a callable. If none is given, ‘rbf’ will be used. If a callable is given it is used to precompute the kernel matrix.
degreeint, default=3
Degree of the polynomial kernel function (‘poly’). Ignored by all other kernels.
gamma{‘scale’, ‘auto’} or float, default=’scale’
Kernel coefficient for ‘rbf’, ‘poly’ and ‘sigmoid’. If gamma='scale' (default) is passed, it uses 1 / (n_features * X.var()) as the value of gamma; if ‘auto’, it uses 1 / n_features. Changed in version 0.22: The default value of gamma changed from ‘auto’ to ‘scale’.
coef0float, default=0.0
Independent term in kernel function. It is only significant in ‘poly’ and ‘sigmoid’.
shrinkingbool, default=True
Whether to use the shrinking heuristic. See the User Guide.
probabilitybool, default=False
Whether to enable probability estimates. This must be enabled prior to calling fit, will slow down that method as it internally uses 5-fold cross-validation, and predict_proba may be inconsistent with predict. Read more in the User Guide.
tolfloat, default=1e-3
Tolerance for stopping criterion.
cache_sizefloat, default=200
Specify the size of the kernel cache (in MB).
class_weight{dict, ‘balanced’}, default=None
Set the parameter C of class i to class_weight[i]*C for SVC. If not given, all classes are supposed to have weight one. The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies as n_samples / (n_classes * np.bincount(y))
verbosebool, default=False
Enable verbose output. Note that this setting takes advantage of a per-process runtime setting in libsvm that, if enabled, may not work properly in a multithreaded context.
max_iterint, default=-1
Hard limit on iterations within solver, or -1 for no limit.
decision_function_shape{‘ovo’, ‘ovr’}, default=’ovr’
Whether to return a one-vs-rest (‘ovr’) decision function of shape (n_samples, n_classes) as all other classifiers, or the original one-vs-one (‘ovo’) decision function of libsvm which has shape (n_samples, n_classes * (n_classes - 1) / 2). However, one-vs-one (‘ovo’) is always used as multi-class strategy. The parameter is ignored for binary classification. Changed in version 0.19: decision_function_shape is ‘ovr’ by default. New in version 0.17: decision_function_shape=’ovr’ is recommended. Changed in version 0.17: Deprecated decision_function_shape=’ovo’ and None.
break_tiesbool, default=False
If true, decision_function_shape='ovr', and number of classes > 2, predict will break ties according to the confidence values of decision_function; otherwise the first class among the tied classes is returned. Please note that breaking ties comes at a relatively high computational cost compared to a simple predict. New in version 0.22.
random_stateint, RandomState instance or None, default=None
Controls the pseudo random number generation for shuffling the data for probability estimates. Ignored when probability is False. Pass an int for reproducible output across multiple function calls. See Glossary. Attributes
class_weight_ndarray of shape (n_classes,)
Multipliers of parameter C of each class. Computed based on the class_weight parameter.
classes_ndarray of shape (n_classes,)
The unique classes labels.
coef_ndarray of shape (n_classes * (n_classes -1) / 2, n_features)
Weights assigned to the features (coefficients in the primal problem). This is only available in the case of a linear kernel. coef_ is a readonly property derived from dual_coef_ and support_vectors_.
dual_coef_ndarray of shape (n_classes - 1, n_SV)
Dual coefficients of the support vector in the decision function (see Mathematical formulation), multiplied by their targets. For multiclass, coefficient for all 1-vs-1 classifiers. The layout of the coefficients in the multiclass case is somewhat non-trivial. See the multi-class section of the User Guide for details.
fit_status_int
0 if correctly fitted, 1 if the algorithm did not converge.
intercept_ndarray of shape (n_classes * (n_classes - 1) / 2,)
Constants in decision function.
support_ndarray of shape (n_SV,)
Indices of support vectors.
support_vectors_ndarray of shape (n_SV, n_features)
Support vectors.
n_support_ndarray of shape (n_classes,), dtype=int32
Number of support vectors for each class.
probA_ndarray of shape (n_classes * (n_classes - 1) / 2,)
probB_ndarray of shape (n_classes * (n_classes - 1) / 2,)
If probability=True, it corresponds to the parameters learned in Platt scaling to produce probability estimates from decision values. If probability=False, it’s an empty array. Platt scaling uses the logistic function 1 / (1 + exp(decision_value * probA_ + probB_)) where probA_ and probB_ are learned from the dataset [2]. For more information on the multiclass case and training procedure see section 8 of [1].
shape_fit_tuple of int of shape (n_dimensions_of_X,)
Array dimensions of training vector X. See also
SVC
Support Vector Machine for classification using libsvm.
LinearSVC
Scalable linear Support Vector Machine for classification using liblinear. References
1
LIBSVM: A Library for Support Vector Machines
2
Platt, John (1999). “Probabilistic outputs for support vector machines and comparison to regularized likelihood methods.” Examples >>> import numpy as np
>>> X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
>>> y = np.array([1, 1, 2, 2])
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.svm import NuSVC
>>> clf = make_pipeline(StandardScaler(), NuSVC())
>>> clf.fit(X, y)
Pipeline(steps=[('standardscaler', StandardScaler()), ('nusvc', NuSVC())])
>>> print(clf.predict([[-0.8, -1]]))
[1]
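The “balanced” class_weight formula quoted in the Parameters section, n_samples / (n_classes * np.bincount(y)), can be checked directly against sklearn's compute_class_weight helper; the toy labels below are an illustrative assumption.

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

y = np.array([0, 0, 0, 1])  # imbalanced toy labels: 3 vs 1

# Manual computation of the documented formula.
manual = len(y) / (len(np.unique(y)) * np.bincount(y))
# sklearn's own computation of the 'balanced' weights.
auto = compute_class_weight('balanced', classes=np.unique(y), y=y)

print(manual)  # the minority class gets the larger weight
assert np.allclose(manual, auto)
```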
Methods
decision_function(X) Evaluates the decision function for the samples in X.
fit(X, y[, sample_weight]) Fit the SVM model according to the given training data.
get_params([deep]) Get parameters for this estimator.
predict(X) Perform classification on samples in X.
score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of this estimator.
decision_function(X) [source]
Evaluates the decision function for the samples in X. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Xndarray of shape (n_samples, n_classes * (n_classes-1) / 2)
Returns the decision function of the sample for each class in the model. If decision_function_shape=’ovr’, the shape is (n_samples, n_classes). Notes If decision_function_shape=’ovo’, the function values are proportional to the distance of the samples X to the separating hyperplane. If the exact distances are required, divide the function values by the norm of the weight vector (coef_). See also this question for further details. If decision_function_shape=’ovr’, the decision function is a monotonic transformation of ovo decision function.
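A sketch of the ovr vs ovo output shapes on a 4-class toy problem (synthetic, well-separated blobs with balanced classes are assumed so that the default nu=0.5 is feasible):

```python
from sklearn.datasets import make_blobs
from sklearn.svm import NuSVC

X, y = make_blobs(n_samples=40, centers=4, random_state=0)

ovr = NuSVC(decision_function_shape='ovr').fit(X, y)
print(ovr.decision_function(X).shape)  # (40, 4): one column per class

ovo = NuSVC(decision_function_shape='ovo').fit(X, y)
print(ovo.decision_function(X).shape)  # (40, 6): 4 * 3 / 2 pairwise classifiers
```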
fit(X, y, sample_weight=None) [source]
Fit the SVM model according to the given training data. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features) or (n_samples, n_samples)
Training vectors, where n_samples is the number of samples and n_features is the number of features. For kernel=”precomputed”, the expected shape of X is (n_samples, n_samples).
yarray-like of shape (n_samples,)
Target values (class labels in classification, real numbers in regression).
sample_weightarray-like of shape (n_samples,), default=None
Per-sample weights. Rescale C per sample. Higher weights force the classifier to put more emphasis on these points. Returns
selfobject
Notes If X and y are not C-ordered and contiguous arrays of np.float64 and X is not a scipy.sparse.csr_matrix, X and/or y may be copied. If X is a dense array, then the other methods will not support sparse matrices as input.
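A sketch of the kernel="precomputed" case described above: fit receives the (n_samples, n_samples) Gram matrix instead of raw features, and predict receives kernels between test and training samples. The linear kernel and blobs data are illustrative assumptions.

```python
from sklearn.datasets import make_blobs
from sklearn.svm import NuSVC

X, y = make_blobs(n_samples=40, centers=2, random_state=0)

gram_train = X @ X.T  # (40, 40) Gram matrix of a linear kernel
clf = NuSVC(kernel='precomputed').fit(gram_train, y)

# Prediction needs kernels of shape (n_samples_test, n_samples_train).
gram_test = X[:5] @ X.T
print(clf.predict(gram_test).shape)  # (5,)
```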
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Perform classification on samples in X. For a one-class model, +1 or -1 is returned. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features) or (n_samples_test, n_samples_train)
For kernel=”precomputed”, the expected shape of X is (n_samples_test, n_samples_train). Returns
y_predndarray of shape (n_samples,)
Class labels for samples in X.
property predict_log_proba
Compute log probabilities of possible outcomes for samples in X. The model needs to have probability information computed at training time: fit with attribute probability set to True. Parameters
Xarray-like of shape (n_samples, n_features) or (n_samples_test, n_samples_train)
For kernel=”precomputed”, the expected shape of X is (n_samples_test, n_samples_train). Returns
Tndarray of shape (n_samples, n_classes)
Returns the log-probabilities of the sample for each class in the model. The columns correspond to the classes in sorted order, as they appear in the attribute classes_. Notes The probability model is created using cross validation, so the results can be slightly different than those obtained by predict. Also, it will produce meaningless results on very small datasets.
property predict_proba
Compute probabilities of possible outcomes for samples in X. The model needs to have probability information computed at training time: fit with attribute probability set to True. Parameters
Xarray-like of shape (n_samples, n_features) or (n_samples_test, n_samples_train)
For kernel=”precomputed”, the expected shape of X is (n_samples_test, n_samples_train). Returns
Tndarray of shape (n_samples, n_classes)
Returns the probability of the sample for each class in the model. The columns correspond to the classes in sorted order, as they appear in the attribute classes_. Notes The probability model is created using cross validation, so the results can be slightly different than those obtained by predict. Also, it will produce meaningless results on very small datasets.
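A sketch showing that probability estimates require probability=True at fit time; the blobs data is an illustrative assumption, and per the Notes above the estimates on such small sets are rough.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import NuSVC

X, y = make_blobs(n_samples=60, centers=3, random_state=0)

# probability=True enables the internal 5-fold CV + Platt scaling.
clf = NuSVC(probability=True, random_state=0).fit(X, y)

proba = clf.predict_proba(X)
print(proba.shape)  # (60, 3): columns follow the order of classes_
assert np.allclose(proba.sum(axis=1), 1.0)  # rows are distributions
```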
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y.
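A short check that score is the plain mean accuracy of predict(X) against y, reusing the toy data from the class Examples (without the scaling pipeline, as an assumption for brevity):

```python
import numpy as np
from sklearn.svm import NuSVC

X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
y = np.array([1, 1, 2, 2])
clf = NuSVC().fit(X, y)

acc = (clf.predict(X) == y).mean()  # fraction of correct predictions
assert clf.score(X, y) == acc
```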
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.svm.nusvc#sklearn.svm.NuSVC |
Examples using sklearn.svm.NuSVC
Non-linear SVM | sklearn.modules.generated.sklearn.svm.nusvc |
decision_function(X) [source]
Evaluates the decision function for the samples in X. Parameters
Xarray-like of shape (n_samples, n_features)
Returns
Xndarray of shape (n_samples, n_classes * (n_classes-1) / 2)
Returns the decision function of the sample for each class in the model. If decision_function_shape=’ovr’, the shape is (n_samples, n_classes). Notes If decision_function_shape=’ovo’, the function values are proportional to the distance of the samples X to the separating hyperplane. If the exact distances are required, divide the function values by the norm of the weight vector (coef_). See also this question for further details. If decision_function_shape=’ovr’, the decision function is a monotonic transformation of ovo decision function. | sklearn.modules.generated.sklearn.svm.nusvc#sklearn.svm.NuSVC.decision_function |
fit(X, y, sample_weight=None) [source]
Fit the SVM model according to the given training data. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features) or (n_samples, n_samples)
Training vectors, where n_samples is the number of samples and n_features is the number of features. For kernel=”precomputed”, the expected shape of X is (n_samples, n_samples).
yarray-like of shape (n_samples,)
Target values (class labels in classification, real numbers in regression).
sample_weightarray-like of shape (n_samples,), default=None
Per-sample weights. Rescale C per sample. Higher weights force the classifier to put more emphasis on these points. Returns
selfobject
Notes If X and y are not C-ordered and contiguous arrays of np.float64 and X is not a scipy.sparse.csr_matrix, X and/or y may be copied. If X is a dense array, then the other methods will not support sparse matrices as input. | sklearn.modules.generated.sklearn.svm.nusvc#sklearn.svm.NuSVC.fit |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.svm.nusvc#sklearn.svm.NuSVC.get_params |
predict(X) [source]
Perform classification on samples in X. For a one-class model, +1 or -1 is returned. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features) or (n_samples_test, n_samples_train)
For kernel=”precomputed”, the expected shape of X is (n_samples_test, n_samples_train). Returns
y_predndarray of shape (n_samples,)
Class labels for samples in X. | sklearn.modules.generated.sklearn.svm.nusvc#sklearn.svm.NuSVC.predict |
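For example, predict returns labels of the same type as the targets passed to fit; a sketch with assumed string labels on toy data:

```python
# Sketch: NuSVC.predict returns class labels (here, assumed string labels).
import numpy as np
from sklearn.svm import NuSVC

rng = np.random.RandomState(0)
X = rng.randn(60, 3)
y = np.where(X[:, 0] > 0, "pos", "neg")  # illustrative string targets

clf = NuSVC().fit(X, y)
y_pred = clf.predict(X[:5])
print(y_pred.shape)  # (5,), entries drawn from clf.classes_
```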
property predict_log_proba
Compute log probabilities of possible outcomes for samples in X. The model needs to have probability information computed at training time: fit with attribute probability set to True. Parameters
Xarray-like of shape (n_samples, n_features) or (n_samples_test, n_samples_train)
For kernel=”precomputed”, the expected shape of X is (n_samples_test, n_samples_train). Returns
Tndarray of shape (n_samples, n_classes)
Returns the log-probabilities of the sample for each class in the model. The columns correspond to the classes in sorted order, as they appear in the attribute classes_. Notes The probability model is created using cross validation, so the results can be slightly different than those obtained by predict. Also, it will produce meaningless results on very small datasets. | sklearn.modules.generated.sklearn.svm.nusvc#sklearn.svm.NuSVC.predict_log_proba |
property predict_proba
Compute probabilities of possible outcomes for samples in X. The model needs to have probability information computed at training time: fit with attribute probability set to True. Parameters
Xarray-like of shape (n_samples, n_features) or (n_samples_test, n_samples_train)
For kernel=”precomputed”, the expected shape of X is (n_samples_test, n_samples_train). Returns
Tndarray of shape (n_samples, n_classes)
Returns the probability of the sample for each class in the model. The columns correspond to the classes in sorted order, as they appear in the attribute classes_. Notes The probability model is created using cross validation, so the results can be slightly different than those obtained by predict. Also, it will produce meaningless results on very small datasets. | sklearn.modules.generated.sklearn.svm.nusvc#sklearn.svm.NuSVC.predict_proba |
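Both probability methods require fitting with probability=True; a sketch on assumed toy data, which also checks that predict_log_proba is the elementwise log of predict_proba:

```python
# Sketch: enabling predict_proba / predict_log_proba via probability=True.
import numpy as np
from sklearn.svm import NuSVC

rng = np.random.RandomState(0)
X = rng.randn(100, 4)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

clf = NuSVC(probability=True).fit(X, y)  # slower: internal cross-validation
proba = clf.predict_proba(X[:5])
log_proba = clf.predict_log_proba(X[:5])

print(proba.shape)                            # (5, 2), columns follow clf.classes_
print(np.allclose(proba.sum(axis=1), 1.0))    # each row sums to one
print(np.allclose(np.exp(log_proba), proba))  # log_proba = log(proba)
```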
score(X, y, sample_weight=None) [source]
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) wrt. y. | sklearn.modules.generated.sklearn.svm.nusvc#sklearn.svm.NuSVC.score |
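The score above is plain mean accuracy, i.e. equivalent to comparing the output of predict against y; a sketch on assumed toy data:

```python
# Sketch: NuSVC.score is the mean accuracy of predict(X) against y.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import NuSVC

rng = np.random.RandomState(0)
X = rng.randn(200, 4)
y = (X[:, 0] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = NuSVC().fit(X_tr, y_tr)

acc = clf.score(X_te, y_te)
print(acc == np.mean(clf.predict(X_te) == y_te))  # same quantity
```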
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.svm.nusvc#sklearn.svm.NuSVC.set_params |
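A sketch of both the flat form and the nested <component>__<parameter> form; the step name nusvc is the lowercased class name that make_pipeline generates.

```python
# Sketch: set_params on a bare estimator and on a nested Pipeline.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import NuSVC

clf = NuSVC()
clf.set_params(nu=0.3, kernel="linear")  # returns self, so calls can chain
print(clf.get_params()["nu"])            # 0.3

pipe = make_pipeline(StandardScaler(), NuSVC())
pipe.set_params(nusvc__nu=0.25)          # <component>__<parameter>
print(pipe.get_params()["nusvc__nu"])    # 0.25
```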
class sklearn.svm.NuSVR(*, nu=0.5, C=1.0, kernel='rbf', degree=3, gamma='scale', coef0=0.0, shrinking=True, tol=0.001, cache_size=200, verbose=False, max_iter=-1) [source]
Nu Support Vector Regression. Like NuSVC, it uses a parameter nu to control the number of support vectors, but for regression. However, unlike NuSVC, where nu replaces C, here nu replaces the parameter epsilon of epsilon-SVR. The implementation is based on libsvm. Read more in the User Guide. Parameters
nufloat, default=0.5
An upper bound on the fraction of training errors and a lower bound of the fraction of support vectors. Should be in the interval (0, 1]. By default 0.5 will be taken.
Cfloat, default=1.0
Penalty parameter C of the error term.
kernel{‘linear’, ‘poly’, ‘rbf’, ‘sigmoid’, ‘precomputed’}, default=’rbf’
Specifies the kernel type to be used in the algorithm. It must be one of ‘linear’, ‘poly’, ‘rbf’, ‘sigmoid’, ‘precomputed’ or a callable. If none is given, ‘rbf’ will be used. If a callable is given it is used to precompute the kernel matrix.
degreeint, default=3
Degree of the polynomial kernel function (‘poly’). Ignored by all other kernels.
gamma{‘scale’, ‘auto’} or float, default=’scale’
Kernel coefficient for ‘rbf’, ‘poly’ and ‘sigmoid’. If gamma='scale' (default) is passed, then it uses 1 / (n_features * X.var()) as the value of gamma; if ‘auto’, it uses 1 / n_features. Changed in version 0.22: The default value of gamma changed from ‘auto’ to ‘scale’.
coef0float, default=0.0
Independent term in kernel function. It is only significant in ‘poly’ and ‘sigmoid’.
shrinkingbool, default=True
Whether to use the shrinking heuristic. See the User Guide.
tolfloat, default=1e-3
Tolerance for stopping criterion.
cache_sizefloat, default=200
Specify the size of the kernel cache (in MB).
verbosebool, default=False
Enable verbose output. Note that this setting takes advantage of a per-process runtime setting in libsvm that, if enabled, may not work properly in a multithreaded context.
max_iterint, default=-1
Hard limit on iterations within solver, or -1 for no limit. Attributes
class_weight_ndarray of shape (n_classes,)
Multipliers of parameter C for each class. Computed based on the class_weight parameter.
coef_ndarray of shape (1, n_features)
Weights assigned to the features (coefficients in the primal problem). This is only available in the case of a linear kernel. coef_ is a read-only property derived from dual_coef_ and support_vectors_.
dual_coef_ndarray of shape (1, n_SV)
Coefficients of the support vector in the decision function.
fit_status_int
0 if correctly fitted, 1 otherwise (will raise warning)
intercept_ndarray of shape (1,)
Constants in decision function.
n_support_ndarray of shape (n_classes,), dtype=int32
Number of support vectors for each class.
shape_fit_tuple of int of shape (n_dimensions_of_X,)
Array dimensions of training vector X.
support_ndarray of shape (n_SV,)
Indices of support vectors.
support_vectors_ndarray of shape (n_SV, n_features)
Support vectors. See also
NuSVC
Support Vector Machine for classification implemented with libsvm with a parameter to control the number of support vectors.
SVR
Epsilon Support Vector Machine for regression implemented with libsvm. References
1
LIBSVM: A Library for Support Vector Machines
2
Platt, John (1999). “Probabilistic outputs for support vector machines and comparison to regularized likelihood methods.” Examples >>> from sklearn.svm import NuSVR
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> import numpy as np
>>> n_samples, n_features = 10, 5
>>> np.random.seed(0)
>>> y = np.random.randn(n_samples)
>>> X = np.random.randn(n_samples, n_features)
>>> regr = make_pipeline(StandardScaler(), NuSVR(C=1.0, nu=0.1))
>>> regr.fit(X, y)
Pipeline(steps=[('standardscaler', StandardScaler()),
('nusvr', NuSVR(nu=0.1))])
Methods
fit(X, y[, sample_weight]) Fit the SVM model according to the given training data.
get_params([deep]) Get parameters for this estimator.
predict(X) Perform regression on samples in X.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
fit(X, y, sample_weight=None) [source]
Fit the SVM model according to the given training data. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features) or (n_samples, n_samples)
Training vectors, where n_samples is the number of samples and n_features is the number of features. For kernel=”precomputed”, the expected shape of X is (n_samples, n_samples).
yarray-like of shape (n_samples,)
Target values (class labels in classification, real numbers in regression).
sample_weightarray-like of shape (n_samples,), default=None
Per-sample weights. Rescale C per sample. Higher weights force the classifier to put more emphasis on these points. Returns
selfobject
Notes If X and y are not C-ordered and contiguous arrays of np.float64 and X is not a scipy.sparse.csr_matrix, X and/or y may be copied. If X is a dense array, then the other methods will not support sparse matrices as input.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Perform regression on samples in X. For a one-class model, +1 (inlier) or -1 (outlier) is returned. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
For kernel=”precomputed”, the expected shape of X is (n_samples_test, n_samples_train). Returns
y_predndarray of shape (n_samples,)
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.svm.nusvr#sklearn.svm.NuSVR |
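The \(R^2\) definition used by NuSVR.score can be verified directly from the u and v sums defined above; a sketch on assumed toy regression data, cross-checked against r2_score:

```python
# Sketch: NuSVR.score equals 1 - u/v with u, v as defined in the docs.
import numpy as np
from sklearn.metrics import r2_score
from sklearn.svm import NuSVR

rng = np.random.RandomState(0)
X = rng.randn(100, 3)
y = 2.0 * X[:, 0] + 0.1 * rng.randn(100)  # illustrative linear target

regr = NuSVR(nu=0.5, C=1.0).fit(X, y)
y_pred = regr.predict(X)

u = ((y - y_pred) ** 2).sum()    # residual sum of squares
v = ((y - y.mean()) ** 2).sum()  # total sum of squares
print(np.isclose(regr.score(X, y), 1 - u / v))            # same value
print(np.isclose(regr.score(X, y), r2_score(y, y_pred)))  # same value
```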
sklearn.svm.NuSVR
Examples using sklearn.svm.NuSVR
Model Complexity Influence | sklearn.modules.generated.sklearn.svm.nusvr |