doc_content | doc_id |
|---|---|
get_indices(i) [source]
Row and column indices of the i’th bicluster. Only works if rows_ and columns_ attributes exist. Parameters
iint
The index of the cluster. Returns
row_indndarray, dtype=np.intp
Indices of rows in the dataset that belong to the bicluster.
col_indndarray, dtype=np.intp
Indices of columns in the dataset that belong to the bicluster. | sklearn.modules.generated.sklearn.cluster.spectralcoclustering#sklearn.cluster.SpectralCoclustering.get_indices |
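A minimal sketch of get_indices in use, assuming toy block-diagonal data (not from the docs): fit SpectralCoclustering, then recover the row and column indices of the first bicluster.

```python
# Hedged sketch: toy data with two obvious blocks on the diagonal.
import numpy as np
from sklearn.cluster import SpectralCoclustering

X = np.array([[5, 5, 0, 0],
              [5, 5, 0, 0],
              [0, 0, 6, 6],
              [0, 0, 6, 6]], dtype=float)

model = SpectralCoclustering(n_clusters=2, random_state=0)
model.fit(X)

# Row and column indices of the 0th bicluster (dtype np.intp).
row_ind, col_ind = model.get_indices(0)
print(row_ind, col_ind)
```

With this block structure, each bicluster contains one diagonal block, so each index array has length 2.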
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.cluster.spectralcoclustering#sklearn.cluster.SpectralCoclustering.get_params |
get_shape(i) [source]
Shape of the i’th bicluster. Parameters
iint
The index of the cluster. Returns
n_rowsint
Number of rows in the bicluster.
n_colsint
Number of columns in the bicluster. | sklearn.modules.generated.sklearn.cluster.spectralcoclustering#sklearn.cluster.SpectralCoclustering.get_shape |
get_submatrix(i, data) [source]
Return the submatrix corresponding to bicluster i. Parameters
iint
The index of the cluster.
dataarray-like of shape (n_samples, n_features)
The data. Returns
submatrixndarray of shape (n_rows, n_cols)
The submatrix corresponding to bicluster i. Notes Works with sparse matrices. Only works if rows_ and columns_ attributes exist. | sklearn.modules.generated.sklearn.cluster.spectralcoclustering#sklearn.cluster.SpectralCoclustering.get_submatrix |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.cluster.spectralcoclustering#sklearn.cluster.SpectralCoclustering.set_params |
sklearn.cluster.spectral_clustering(affinity, *, n_clusters=8, n_components=None, eigen_solver=None, random_state=None, n_init=10, eigen_tol=0.0, assign_labels='kmeans', verbose=False) [source]
Apply clustering to a projection of the normalized Laplacian. In practice Spectral Clustering is very useful when the structure of the individual clusters is highly non-convex or more generally when a measure of the center and spread of the cluster is not a suitable description of the complete cluster. For instance, when clusters are nested circles on the 2D plane. If affinity is the adjacency matrix of a graph, this method can be used to find normalized graph cuts. Read more in the User Guide. Parameters
affinity{array-like, sparse matrix} of shape (n_samples, n_samples)
The affinity matrix describing the relationship of the samples to embed. Must be symmetric. Possible examples:
adjacency matrix of a graph, heat kernel of the pairwise distance matrix of the samples, symmetric k-nearest neighbours connectivity matrix of the samples.
n_clustersint, default=8
Number of clusters to extract.
n_componentsint, default=n_clusters
Number of eigenvectors to use for the spectral embedding.
eigen_solver{None, ‘arpack’, ‘lobpcg’, or ‘amg’}
The eigenvalue decomposition strategy to use. AMG requires pyamg to be installed. It can be faster on very large, sparse problems, but may also lead to instabilities. If None, then 'arpack' is used.
random_stateint, RandomState instance, default=None
A pseudo random number generator used for the initialization of the lobpcg eigen vectors decomposition when eigen_solver == ‘amg’ and by the K-Means initialization. Use an int to make the randomness deterministic. See Glossary.
n_initint, default=10
Number of times the k-means algorithm will be run with different centroid seeds. The final results will be the best output of n_init consecutive runs in terms of inertia.
eigen_tolfloat, default=0.0
Stopping criterion for eigendecomposition of the Laplacian matrix when using arpack eigen_solver.
assign_labels{‘kmeans’, ‘discretize’}, default=’kmeans’
The strategy to use to assign labels in the embedding space. There are two ways to assign labels after the Laplacian embedding: k-means is a popular choice, but it can be sensitive to initialization; discretization is another approach which is less sensitive to random initialization. See the ‘Multiclass spectral clustering’ paper referenced below for more details on the discretization approach.
verbosebool, default=False
Verbosity mode. New in version 0.24. Returns
labelsarray of integers, shape: n_samples
The labels of the clusters. Notes The graph should contain only one connected component; otherwise the results make little sense. This algorithm solves the normalized cut for k=2: it is a normalized spectral clustering. References Normalized cuts and image segmentation, 2000 Jianbo Shi, Jitendra Malik http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.160.2324
A Tutorial on Spectral Clustering, 2007 Ulrike von Luxburg http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.165.9323
Multiclass spectral clustering, 2003 Stella X. Yu, Jianbo Shi https://www1.icsi.berkeley.edu/~stellayu/publication/doc/2003kwayICCV.pdf | sklearn.modules.generated.sklearn.cluster.spectral_clustering#sklearn.cluster.spectral_clustering |
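A minimal sketch of calling spectral_clustering directly, assuming toy data and an RBF kernel as the affinity (both are illustrative choices, not from the docs): build a symmetric affinity matrix for two well-separated point clouds and cluster it.

```python
# Hedged sketch: RBF affinity over two well-separated clouds, then
# spectral_clustering on the precomputed (n_samples, n_samples) matrix.
import numpy as np
from sklearn.cluster import spectral_clustering
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(10, 2), rng.randn(10, 2) + 10.0])  # toy data

affinity = rbf_kernel(X, gamma=0.1)  # symmetric, all entries positive
labels = spectral_clustering(affinity, n_clusters=2, random_state=0)
print(labels)
```

Because the between-cloud affinities are effectively zero, the returned labels split the samples cleanly into the two clouds.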
sklearn.cluster.ward_tree(X, *, connectivity=None, n_clusters=None, return_distance=False) [source]
Ward clustering based on a feature matrix. Recursively merges the pair of clusters that minimally increases within-cluster variance. The inertia matrix uses a Heapq-based representation. This is the structured version, which takes into account some topological structure between samples. Read more in the User Guide. Parameters
Xarray-like of shape (n_samples, n_features)
Feature matrix representing n_samples samples to be clustered.
connectivitysparse matrix, default=None
Connectivity matrix. Defines for each sample the neighboring samples following a given structure of the data. The matrix is assumed to be symmetric and only the upper triangular half is used. Default is None, i.e., the Ward algorithm is unstructured.
n_clustersint, default=None
Stop early the construction of the tree at n_clusters. This is useful to decrease computation time if the number of clusters is not small compared to the number of samples. In this case, the complete tree is not computed, thus the ‘children’ output is of limited use, and the ‘parents’ output should rather be used. This option is valid only when specifying a connectivity matrix.
return_distancebool, default=False
If True, return the distance between the clusters. Returns
childrenndarray of shape (n_nodes-1, 2)
The children of each non-leaf node. Values less than n_samples correspond to leaves of the tree which are the original samples. A node i greater than or equal to n_samples is a non-leaf node and has children children[i - n_samples]. Alternatively, at the i-th iteration, children[i][0] and children[i][1] are merged to form node n_samples + i.
n_connected_componentsint
The number of connected components in the graph.
n_leavesint
The number of leaves in the tree.
parentsndarray of shape (n_nodes,) or None
The parent of each node. Only returned when a connectivity matrix is specified; otherwise ‘None’ is returned.
distancesndarray of shape (n_nodes-1,)
Only returned if return_distance is set to True (for compatibility). The distances between the centers of the nodes. distances[i] corresponds to a weighted Euclidean distance between the nodes children[i, 0] and children[i, 1]. If the nodes refer to leaves of the tree, then distances[i] is their unweighted Euclidean distance. Distances are updated in the following way (from scipy.cluster.hierarchy.linkage): The new entry \(d(u,v)\) is computed as follows, \[d(u,v) = \sqrt{\frac{|v|+|s|} {T}d(v,s)^2 + \frac{|v|+|t|} {T}d(v,t)^2 - \frac{|v|} {T}d(s,t)^2}\] where \(u\) is the newly joined cluster consisting of clusters \(s\) and \(t\), \(v\) is an unused cluster in the forest, \(T=|v|+|s|+|t|\), and \(|*|\) is the cardinality of its argument. This is also known as the incremental algorithm. | sklearn.modules.generated.sklearn.cluster.ward_tree#sklearn.cluster.ward_tree |
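A minimal sketch of the unstructured call (connectivity=None), with toy data as an assumption: four 2D points forming two tight pairs, so the merge order is easy to follow.

```python
# Hedged sketch: unstructured ward_tree on a tiny feature matrix.
import numpy as np
from sklearn.cluster import ward_tree

X = np.array([[0.0, 0.0],
              [0.1, 0.0],
              [5.0, 5.0],
              [5.1, 5.0]])  # two tight pairs of samples

children, n_connected_components, n_leaves, parents = ward_tree(X)
print(children)   # (n_nodes - 1, 2): which nodes were merged at each step
print(n_leaves)   # one leaf per sample
```

Without a connectivity matrix the tree is built over all samples as one component, so parents is None, as noted for the parents return value.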
class sklearn.compose.ColumnTransformer(transformers, *, remainder='drop', sparse_threshold=0.3, n_jobs=None, transformer_weights=None, verbose=False) [source]
Applies transformers to columns of an array or pandas DataFrame. This estimator allows different columns or column subsets of the input to be transformed separately and the features generated by each transformer will be concatenated to form a single feature space. This is useful for heterogeneous or columnar data, to combine several feature extraction mechanisms or transformations into a single transformer. Read more in the User Guide. New in version 0.20. Parameters
transformerslist of tuples
List of (name, transformer, columns) tuples specifying the transformer objects to be applied to subsets of the data.
namestr
Like in Pipeline and FeatureUnion, this allows the transformer and its parameters to be set using set_params and searched in grid search.
transformer{‘drop’, ‘passthrough’} or estimator
Estimator must support fit and transform. Special-cased strings ‘drop’ and ‘passthrough’ are accepted as well, to indicate to drop the columns or to pass them through untransformed, respectively.
columnsstr, array-like of str, int, array-like of int, array-like of bool, slice or callable
Indexes the data on its second axis. Integers are interpreted as positional columns, while strings can reference DataFrame columns by name. A scalar string or int should be used where transformer expects X to be a 1d array-like (vector), otherwise a 2d array will be passed to the transformer. A callable is passed the input data X and can return any of the above. To select multiple columns by name or dtype, you can use make_column_selector.
remainder{‘drop’, ‘passthrough’} or estimator, default=’drop’
By default, only the specified columns in transformers are transformed and combined in the output, and the non-specified columns are dropped. (default of 'drop'). By specifying remainder='passthrough', all remaining columns that were not specified in transformers will be automatically passed through. This subset of columns is concatenated with the output of the transformers. By setting remainder to be an estimator, the remaining non-specified columns will use the remainder estimator. The estimator must support fit and transform. Note that using this feature requires that the DataFrame columns input at fit and transform have identical order.
sparse_thresholdfloat, default=0.3
If the output of the different transformers contains sparse matrices, these will be stacked as a sparse matrix if the overall density is lower than this value. Use sparse_threshold=0 to always return dense. When the transformed output consists of all dense data, the stacked result will be dense, and this keyword will be ignored.
n_jobsint, default=None
Number of jobs to run in parallel. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
transformer_weightsdict, default=None
Multiplicative weights for features per transformer. The output of the transformer is multiplied by these weights. Keys are transformer names, values the weights.
verbosebool, default=False
If True, the time elapsed while fitting each transformer will be printed as it is completed. Attributes
transformers_list
The collection of fitted transformers as tuples of (name, fitted_transformer, column). fitted_transformer can be an estimator, ‘drop’, or ‘passthrough’. In case there were no columns selected, this will be the unfitted transformer. If there are remaining columns, the final element is a tuple of the form: (‘remainder’, transformer, remaining_columns) corresponding to the remainder parameter. If there are remaining columns, then len(transformers_)==len(transformers)+1, otherwise len(transformers_)==len(transformers).
named_transformers_Bunch
Access the fitted transformer by name.
sparse_output_bool
Boolean flag indicating whether the output of transform is a sparse matrix or a dense numpy array, which depends on the output of the individual transformers and the sparse_threshold keyword. See also
make_column_transformer
Convenience function for combining the outputs of multiple transformer objects applied to column subsets of the original feature space.
make_column_selector
Convenience function for selecting columns based on datatype or the columns name with a regex pattern. Notes The order of the columns in the transformed feature matrix follows the order of how the columns are specified in the transformers list. Columns of the original feature matrix that are not specified are dropped from the resulting transformed feature matrix, unless specified in the passthrough keyword. Those columns specified with passthrough are added at the right to the output of the transformers. Examples >>> import numpy as np
>>> from sklearn.compose import ColumnTransformer
>>> from sklearn.preprocessing import Normalizer
>>> ct = ColumnTransformer(
... [("norm1", Normalizer(norm='l1'), [0, 1]),
... ("norm2", Normalizer(norm='l1'), slice(2, 4))])
>>> X = np.array([[0., 1., 2., 2.],
... [1., 1., 0., 1.]])
>>> # Normalizer scales each row of X to unit norm. A separate scaling
>>> # is applied for the two first and two last elements of each
>>> # row independently.
>>> ct.fit_transform(X)
array([[0. , 1. , 0.5, 0.5],
[0.5, 0.5, 0. , 1. ]])
Methods
fit(X[, y]) Fit all transformers using X.
fit_transform(X[, y]) Fit all transformers, transform the data and concatenate results.
get_feature_names() Get feature names from all transformers.
get_params([deep]) Get parameters for this estimator.
set_params(**kwargs) Set the parameters of this estimator.
transform(X) Transform X separately by each transformer, concatenate results.
fit(X, y=None) [source]
Fit all transformers using X. Parameters
X{array-like, dataframe} of shape (n_samples, n_features)
Input data, of which specified subsets are used to fit the transformers.
yarray-like of shape (n_samples,…), default=None
Targets for supervised learning. Returns
selfColumnTransformer
This estimator.
fit_transform(X, y=None) [source]
Fit all transformers, transform the data and concatenate results. Parameters
X{array-like, dataframe} of shape (n_samples, n_features)
Input data, of which specified subsets are used to fit the transformers.
yarray-like of shape (n_samples,), default=None
Targets for supervised learning. Returns
X_t{array-like, sparse matrix} of shape (n_samples, sum_n_components)
hstack of results of transformers. sum_n_components is the sum of n_components (output dimension) over transformers. If any result is a sparse matrix, everything will be converted to sparse matrices.
get_feature_names() [source]
Get feature names from all transformers. Returns
feature_nameslist of strings
Names of the features produced by transform.
get_params(deep=True) [source]
Get parameters for this estimator. Returns the parameters given in the constructor as well as the estimators contained within the transformers of the ColumnTransformer. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
property named_transformers_
Access the fitted transformer by name. Read-only attribute to access any transformer by given name. Keys are transformer names and values are the fitted transformer objects.
set_params(**kwargs) [source]
Set the parameters of this estimator. Valid parameter keys can be listed with get_params(). Note that you can directly set the parameters of the estimators contained in transformers of ColumnTransformer. Returns
self
transform(X) [source]
Transform X separately by each transformer, concatenate results. Parameters
X{array-like, dataframe} of shape (n_samples, n_features)
The data to be transformed by subset. Returns
X_t{array-like, sparse matrix} of shape (n_samples, sum_n_components)
hstack of results of transformers. sum_n_components is the sum of n_components (output dimension) over transformers. If any result is a sparse matrix, everything will be converted to sparse matrices. | sklearn.modules.generated.sklearn.compose.columntransformer#sklearn.compose.ColumnTransformer |
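A minimal sketch of the remainder parameter in use, reusing the same toy matrix as the doctest above: with remainder='passthrough', the columns not named in transformers are appended untransformed to the right of the transformer outputs.

```python
# Hedged sketch: l1-normalize columns 0-1, pass columns 2-3 through as-is.
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import Normalizer

X = np.array([[0., 1., 2., 2.],
              [1., 1., 0., 1.]])

ct = ColumnTransformer(
    [("norm1", Normalizer(norm='l1'), [0, 1])],
    remainder='passthrough')  # columns 2 and 3 are kept untransformed

Xt = ct.fit_transform(X)
print(Xt)
```

The first two output columns are row-wise l1-normalized; the last two are the original columns 2 and 3, appended at the right as described in the Notes.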
sklearn.compose.ColumnTransformer
Examples using sklearn.compose.ColumnTransformer
Poisson regression and non-normal loss
Tweedie regression on insurance claims
Permutation Importance vs Random Forest Feature Importance (MDI)
Column Transformer with Mixed Types
Column Transformer with Heterogeneous Data Sources | sklearn.modules.generated.sklearn.compose.columntransformer |
fit(X, y=None) [source]
Fit all transformers using X. Parameters
X{array-like, dataframe} of shape (n_samples, n_features)
Input data, of which specified subsets are used to fit the transformers.
yarray-like of shape (n_samples,…), default=None
Targets for supervised learning. Returns
selfColumnTransformer
This estimator.
fit_transform(X, y=None) [source]
Fit all transformers, transform the data and concatenate results. Parameters
X{array-like, dataframe} of shape (n_samples, n_features)
Input data, of which specified subsets are used to fit the transformers.
yarray-like of shape (n_samples,), default=None
Targets for supervised learning. Returns
X_t{array-like, sparse matrix} of shape (n_samples, sum_n_components)
hstack of results of transformers. sum_n_components is the sum of n_components (output dimension) over transformers. If any result is a sparse matrix, everything will be converted to sparse matrices. | sklearn.modules.generated.sklearn.compose.columntransformer#sklearn.compose.ColumnTransformer.fit_transform |
get_feature_names() [source]
Get feature names from all transformers. Returns
feature_nameslist of strings
Names of the features produced by transform. | sklearn.modules.generated.sklearn.compose.columntransformer#sklearn.compose.ColumnTransformer.get_feature_names |
get_params(deep=True) [source]
Get parameters for this estimator. Returns the parameters given in the constructor as well as the estimators contained within the transformers of the ColumnTransformer. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.compose.columntransformer#sklearn.compose.ColumnTransformer.get_params |
property named_transformers_
Access the fitted transformer by name. Read-only attribute to access any transformer by given name. Keys are transformer names and values are the fitted transformer objects. | sklearn.modules.generated.sklearn.compose.columntransformer#sklearn.compose.ColumnTransformer.named_transformers_ |
set_params(**kwargs) [source]
Set the parameters of this estimator. Valid parameter keys can be listed with get_params(). Note that you can directly set the parameters of the estimators contained in transformers of ColumnTransformer. Returns
self | sklearn.modules.generated.sklearn.compose.columntransformer#sklearn.compose.ColumnTransformer.set_params |
transform(X) [source]
Transform X separately by each transformer, concatenate results. Parameters
X{array-like, dataframe} of shape (n_samples, n_features)
The data to be transformed by subset. Returns
X_t{array-like, sparse matrix} of shape (n_samples, sum_n_components)
hstack of results of transformers. sum_n_components is the sum of n_components (output dimension) over transformers. If any result is a sparse matrix, everything will be converted to sparse matrices. | sklearn.modules.generated.sklearn.compose.columntransformer#sklearn.compose.ColumnTransformer.transform |
sklearn.compose.make_column_selector(pattern=None, *, dtype_include=None, dtype_exclude=None) [source]
Create a callable to select columns to be used with ColumnTransformer. make_column_selector can select columns based on datatype or the columns name with a regex. When using multiple selection criteria, all criteria must match for a column to be selected. Parameters
patternstr, default=None
Name of columns containing this regex pattern will be included. If None, columns will not be selected based on a name pattern.
dtype_includecolumn dtype or list of column dtypes, default=None
A selection of dtypes to include. For more details, see pandas.DataFrame.select_dtypes.
dtype_excludecolumn dtype or list of column dtypes, default=None
A selection of dtypes to exclude. For more details, see pandas.DataFrame.select_dtypes. Returns
selectorcallable
Callable for column selection to be used by a ColumnTransformer. See also
ColumnTransformer
Class that allows combining the outputs of multiple transformer objects used on column subsets of the data into a single feature space. Examples >>> from sklearn.preprocessing import StandardScaler, OneHotEncoder
>>> from sklearn.compose import make_column_transformer
>>> from sklearn.compose import make_column_selector
>>> import numpy as np
>>> import pandas as pd
>>> X = pd.DataFrame({'city': ['London', 'London', 'Paris', 'Sallisaw'],
... 'rating': [5, 3, 4, 5]})
>>> ct = make_column_transformer(
... (StandardScaler(),
... make_column_selector(dtype_include=np.number)), # rating
... (OneHotEncoder(),
... make_column_selector(dtype_include=object))) # city
>>> ct.fit_transform(X)
array([[ 0.90453403, 1. , 0. , 0. ],
[-1.50755672, 1. , 0. , 0. ],
[-0.30151134, 0. , 1. , 0. ],
[ 0.90453403, 0. , 0. , 1. ]]) | sklearn.modules.generated.sklearn.compose.make_column_selector#sklearn.compose.make_column_selector |
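A minimal sketch of the pattern parameter on its own, with toy column names as an assumption: the returned callable takes the DataFrame and yields the matching column names.

```python
# Hedged sketch: select columns whose names match a regex.
import pandas as pd
from sklearn.compose import make_column_selector

X = pd.DataFrame({'city': ['London', 'Paris'],
                  'rating_mean': [4.5, 3.5],
                  'rating_count': [10, 20]})

selector = make_column_selector(pattern='^rating')  # names starting with "rating"
print(selector(X))
```

When pattern is combined with dtype_include/dtype_exclude, all criteria must match, as noted above.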
sklearn.compose.make_column_transformer(*transformers, remainder='drop', sparse_threshold=0.3, n_jobs=None, verbose=False) [source]
Construct a ColumnTransformer from the given transformers. This is a shorthand for the ColumnTransformer constructor; it does not require, and does not permit, naming the transformers. Instead, they will be given names automatically based on their types. It also does not allow weighting with transformer_weights. Read more in the User Guide. Parameters
*transformerstuples
Tuples of the form (transformer, columns) specifying the transformer objects to be applied to subsets of the data.
transformer{‘drop’, ‘passthrough’} or estimator
Estimator must support fit and transform. Special-cased strings ‘drop’ and ‘passthrough’ are accepted as well, to indicate to drop the columns or to pass them through untransformed, respectively.
columnsstr, array-like of str, int, array-like of int, slice, array-like of bool or callable
Indexes the data on its second axis. Integers are interpreted as positional columns, while strings can reference DataFrame columns by name. A scalar string or int should be used where transformer expects X to be a 1d array-like (vector), otherwise a 2d array will be passed to the transformer. A callable is passed the input data X and can return any of the above. To select multiple columns by name or dtype, you can use make_column_selector.
remainder{‘drop’, ‘passthrough’} or estimator, default=’drop’
By default, only the specified columns in transformers are transformed and combined in the output, and the non-specified columns are dropped. (default of 'drop'). By specifying remainder='passthrough', all remaining columns that were not specified in transformers will be automatically passed through. This subset of columns is concatenated with the output of the transformers. By setting remainder to be an estimator, the remaining non-specified columns will use the remainder estimator. The estimator must support fit and transform.
sparse_thresholdfloat, default=0.3
If the transformed output consists of a mix of sparse and dense data, it will be stacked as a sparse matrix if the density is lower than this value. Use sparse_threshold=0 to always return dense. When the transformed output consists of all sparse or all dense data, the stacked result will be sparse or dense, respectively, and this keyword will be ignored.
n_jobsint, default=None
Number of jobs to run in parallel. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
verbosebool, default=False
If True, the time elapsed while fitting each transformer will be printed as it is completed. Returns
ctColumnTransformer
See also
ColumnTransformer
Class that allows combining the outputs of multiple transformer objects used on column subsets of the data into a single feature space. Examples >>> from sklearn.preprocessing import StandardScaler, OneHotEncoder
>>> from sklearn.compose import make_column_transformer
>>> make_column_transformer(
... (StandardScaler(), ['numerical_column']),
... (OneHotEncoder(), ['categorical_column']))
ColumnTransformer(transformers=[('standardscaler', StandardScaler(...),
['numerical_column']),
('onehotencoder', OneHotEncoder(...),
['categorical_column'])]) | sklearn.modules.generated.sklearn.compose.make_column_transformer#sklearn.compose.make_column_transformer |
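As a hand-check of the automatic naming and column routing described above, here is a minimal sketch (assuming pandas is installed; the column names are made up for illustration):

```python
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.DataFrame({
    "numerical_column": [0.0, 1.0, 2.0, 3.0],
    "categorical_column": ["a", "b", "a", "b"],
})

# Transformer names are derived automatically from the class names,
# so no explicit naming is needed (or allowed).
ct = make_column_transformer(
    (StandardScaler(), ["numerical_column"]),
    (OneHotEncoder(), ["categorical_column"]),
    remainder="drop",
)
out = ct.fit_transform(df)

# One scaled column plus two one-hot columns
print(out.shape)  # (4, 3)
names = [name for name, _, _ in ct.transformers_]
print(names[:2])  # ['standardscaler', 'onehotencoder']
```

Because the names are lower-cased class names, a grid search over the nested parameters would address them as, e.g., `standardscaler__with_mean`.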
class sklearn.compose.TransformedTargetRegressor(regressor=None, *, transformer=None, func=None, inverse_func=None, check_inverse=True) [source]
Meta-estimator to regress on a transformed target. Useful for applying a non-linear transformation to the target y in regression problems. This transformation can be given as a Transformer such as the QuantileTransformer or as a function and its inverse such as log and exp. The computation during fit is: regressor.fit(X, func(y))
or: regressor.fit(X, transformer.transform(y))
The computation during predict is: inverse_func(regressor.predict(X))
or: transformer.inverse_transform(regressor.predict(X))
Read more in the User Guide. New in version 0.20. Parameters
regressorobject, default=None
Regressor object such as derived from RegressorMixin. This regressor will automatically be cloned each time prior to fitting. If regressor is None, LinearRegression() is created and used.
transformerobject, default=None
Estimator object such as derived from TransformerMixin. Cannot be set at the same time as func and inverse_func. If transformer is None as well as func and inverse_func, the transformer will be an identity transformer. Note that the transformer will be cloned during fitting. Also, the transformer restricts y to be a numpy array.
funcfunction, default=None
Function to apply to y before passing to fit. Cannot be set at the same time as transformer. The function needs to return a 2-dimensional array. If func is None, the function used will be the identity function.
inverse_funcfunction, default=None
Function to apply to the prediction of the regressor. Cannot be set at the same time as transformer. The function needs to return a 2-dimensional array. The inverse function is used to return predictions to the same space as the original training labels.
check_inversebool, default=True
Whether to check that transform followed by inverse_transform or func followed by inverse_func leads to the original targets. Attributes
regressor_object
Fitted regressor.
transformer_object
Transformer used in fit and predict. Notes Internally, the target y is always converted into a 2-dimensional array to be used by scikit-learn transformers. At the time of prediction, the output will be reshaped to have the same number of dimensions as y. See examples/compose/plot_transformed_target.py. Examples >>> import numpy as np
>>> from sklearn.linear_model import LinearRegression
>>> from sklearn.compose import TransformedTargetRegressor
>>> tt = TransformedTargetRegressor(regressor=LinearRegression(),
... func=np.log, inverse_func=np.exp)
>>> X = np.arange(4).reshape(-1, 1)
>>> y = np.exp(2 * X).ravel()
>>> tt.fit(X, y)
TransformedTargetRegressor(...)
>>> tt.score(X, y)
1.0
>>> tt.regressor_.coef_
array([2.])
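The predict-time composition described above (inverse_func applied to the inner regressor's output) can be hand-checked; a short sketch, reusing the log/exp pair from the example:

```python
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.linear_model import LinearRegression

X = np.arange(1, 5, dtype=float).reshape(-1, 1)
y = np.exp(2 * X).ravel()

tt = TransformedTargetRegressor(regressor=LinearRegression(),
                                func=np.log, inverse_func=np.exp)
tt.fit(X, y)  # internally fits LinearRegression on log(y)

# predict() applies inverse_func to the inner regressor's output,
# returning predictions on the original scale of y
manual = np.exp(tt.regressor_.predict(X))
assert np.allclose(tt.predict(X), manual)
```

Since log(y) is exactly linear in X here, the round trip reproduces y up to floating-point precision.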
Methods
fit(X, y, **fit_params) Fit the model according to the given training data.
get_params([deep]) Get parameters for this estimator.
predict(X) Predict using the base regressor, applying inverse.
score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction.
set_params(**params) Set the parameters of this estimator.
fit(X, y, **fit_params) [source]
Fit the model according to the given training data. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target values.
**fit_paramsdict
Parameters passed to the fit method of the underlying regressor. Returns
selfobject
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
predict(X) [source]
Predict using the base regressor, applying inverse. The regressor is used to predict and the inverse_func or inverse_transform is applied before returning the prediction. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Samples. Returns
y_hatndarray of shape (n_samples,)
Predicted values.
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) w.r.t. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with the default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
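The definition above can be verified directly against score; a small hand-check using an identity-transform TransformedTargetRegressor on made-up data:

```python
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.linear_model import LinearRegression

X = np.arange(10, dtype=float).reshape(-1, 1)
y = 3.0 * X.ravel() + 1.0 + np.array([0.1, -0.1] * 5)  # small noise

# No transformer/func given, so the target transform is the identity
reg = TransformedTargetRegressor(regressor=LinearRegression()).fit(X, y)
y_pred = reg.predict(X)

u = ((y - y_pred) ** 2).sum()    # residual sum of squares
v = ((y - y.mean()) ** 2).sum()  # total sum of squares
r2 = 1.0 - u / v
assert np.isclose(r2, reg.score(X, y))
```

With this nearly linear target, both the manual computation and score give an \(R^2\) just below 1.0.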
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.compose.transformedtargetregressor#sklearn.compose.TransformedTargetRegressor |
sklearn.compose.TransformedTargetRegressor
Examples using sklearn.compose.TransformedTargetRegressor
Poisson regression and non-normal loss
Common pitfalls in the interpretation of coefficients of linear models
Effect of transforming the targets in regression model | sklearn.modules.generated.sklearn.compose.transformedtargetregressor |
fit(X, y, **fit_params) [source]
Fit the model according to the given training data. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
yarray-like of shape (n_samples,)
Target values.
**fit_paramsdict
Parameters passed to the fit method of the underlying regressor. Returns
selfobject | sklearn.modules.generated.sklearn.compose.transformedtargetregressor#sklearn.compose.TransformedTargetRegressor.fit |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.compose.transformedtargetregressor#sklearn.compose.TransformedTargetRegressor.get_params |
predict(X) [source]
Predict using the base regressor, applying inverse. The regressor is used to predict and the inverse_func or inverse_transform is applied before returning the prediction. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Samples. Returns
y_hatndarray of shape (n_samples,)
Predicted values. | sklearn.modules.generated.sklearn.compose.transformedtargetregressor#sklearn.compose.TransformedTargetRegressor.predict |
score(X, y, sample_weight=None) [source]
Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True values for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
\(R^2\) of self.predict(X) w.r.t. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with the default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). | sklearn.modules.generated.sklearn.compose.transformedtargetregressor#sklearn.compose.TransformedTargetRegressor.score
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.compose.transformedtargetregressor#sklearn.compose.TransformedTargetRegressor.set_params |
sklearn.config_context(**new_config) [source]
Context manager for global scikit-learn configuration. Parameters
assume_finitebool, default=False
If True, validation for finiteness will be skipped, saving time, but leading to potential crashes. If False, validation for finiteness will be performed, avoiding errors. Global default: False.
working_memoryint, default=1024
If set, scikit-learn will attempt to limit the size of temporary arrays to this number of MiB (per job when parallelised), often saving both computation time and memory on expensive operations that can be performed in chunks. Global default: 1024.
print_changed_onlybool, default=True
If True, only the parameters that were set to non-default values will be printed when printing an estimator. For example, when True, print(SVC()) will only print ‘SVC()’, while when False it would print ‘SVC(C=1.0, cache_size=200, …)’ with all the unchanged parameters. Changed in version 0.23: Default changed from False to True.
display{‘text’, ‘diagram’}, default=’text’
If ‘diagram’, estimators will be displayed as a diagram in a Jupyter lab or notebook context. If ‘text’, estimators will be displayed as text. Default is ‘text’. New in version 0.23. See also
set_config
Set global scikit-learn configuration.
get_config
Retrieve current values of the global configuration. Notes All settings, not just those presently modified, will be returned to their previous values when the context manager is exited. This is not thread-safe. Examples >>> import sklearn
>>> from sklearn.utils.validation import assert_all_finite
>>> with sklearn.config_context(assume_finite=True):
... assert_all_finite([float('nan')])
>>> with sklearn.config_context(assume_finite=True):
... with sklearn.config_context(assume_finite=False):
... assert_all_finite([float('nan')])
Traceback (most recent call last):
...
ValueError: Input contains NaN, ... | sklearn.modules.generated.sklearn.config_context#sklearn.config_context |
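The revert-on-exit behaviour described in the Notes can be sketched with a second setting, working_memory (capturing the prior value rather than assuming the global default):

```python
import sklearn

before = sklearn.get_config()["working_memory"]

with sklearn.config_context(working_memory=256):
    # the new value is visible anywhere inside the block
    assert sklearn.get_config()["working_memory"] == 256

# on exit, every setting returns to its previous value
assert sklearn.get_config()["working_memory"] == before
```

Because the restore happens in the context manager's exit, it also applies when the block is left via an exception; as noted above, this mechanism is not thread-safe.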
class sklearn.covariance.EllipticEnvelope(*, store_precision=True, assume_centered=False, support_fraction=None, contamination=0.1, random_state=None) [source]
An object for detecting outliers in a Gaussian distributed dataset. Read more in the User Guide. Parameters
store_precisionbool, default=True
Specify if the estimated precision is stored.
assume_centeredbool, default=False
If True, the support of robust location and covariance estimates is computed, and a covariance estimate is recomputed from it, without centering the data. Useful to work with data whose mean is approximately zero but not exactly zero. If False, the robust location and covariance are directly computed with the FastMCD algorithm without additional treatment.
support_fractionfloat, default=None
The proportion of points to be included in the support of the raw MCD estimate. If None, the minimum value of support_fraction will be used within the algorithm: (n_samples + n_features + 1) / 2. Range is (0, 1).
contaminationfloat, default=0.1
The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Range is (0, 0.5).
random_stateint, RandomState instance or None, default=None
Determines the pseudo random number generator for shuffling the data. Pass an int for reproducible results across multiple function calls. See Glossary. Attributes
location_ndarray of shape (n_features,)
Estimated robust location.
covariance_ndarray of shape (n_features, n_features)
Estimated robust covariance matrix.
precision_ndarray of shape (n_features, n_features)
Estimated pseudo-inverse matrix (stored only if store_precision is True).
support_ndarray of shape (n_samples,)
A mask of the observations that have been used to compute the robust estimates of location and shape.
offset_float
Offset used to define the decision function from the raw scores. We have the relation: decision_function = score_samples - offset_. The offset depends on the contamination parameter and is defined in such a way we obtain the expected number of outliers (samples with decision function < 0) in training. New in version 0.20.
raw_location_ndarray of shape (n_features,)
The raw robust estimated location before correction and re-weighting.
raw_covariance_ndarray of shape (n_features, n_features)
The raw robust estimated covariance before correction and re-weighting.
raw_support_ndarray of shape (n_samples,)
A mask of the observations that have been used to compute the raw robust estimates of location and shape, before correction and re-weighting.
dist_ndarray of shape (n_samples,)
Mahalanobis distances of the training set (on which fit is called) observations. See also
EmpiricalCovariance, MinCovDet
Notes Outlier detection from covariance estimation may break or not perform well in high-dimensional settings. In particular, one should always take care to work with n_samples > n_features ** 2. References
1
Rousseeuw, P.J., Van Driessen, K. “A fast algorithm for the minimum covariance determinant estimator” Technometrics 41(3), 212 (1999) Examples >>> import numpy as np
>>> from sklearn.covariance import EllipticEnvelope
>>> true_cov = np.array([[.8, .3],
... [.3, .4]])
>>> X = np.random.RandomState(0).multivariate_normal(mean=[0, 0],
... cov=true_cov,
... size=500)
>>> cov = EllipticEnvelope(random_state=0).fit(X)
>>> # predict returns 1 for an inlier and -1 for an outlier
>>> cov.predict([[0, 0],
... [3, 3]])
array([ 1, -1])
>>> cov.covariance_
array([[0.7411..., 0.2535...],
[0.2535..., 0.3053...]])
>>> cov.location_
array([0.0813... , 0.0427...])
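The relation decision_function = score_samples - offset_ (see the offset_ attribute above) and the role of contamination can be hand-checked; a short sketch on synthetic Gaussian data:

```python
import numpy as np
from sklearn.covariance import EllipticEnvelope

rng = np.random.RandomState(42)
X = rng.multivariate_normal(mean=[0, 0],
                            cov=[[1.0, 0.2], [0.2, 1.0]],
                            size=300)

env = EllipticEnvelope(contamination=0.1, random_state=0).fit(X)

# decision_function is the score shifted by offset_; predict thresholds at 0
assert np.allclose(env.decision_function(X),
                   env.score_samples(X) - env.offset_)

labels = env.predict(X)
frac_out = (labels == -1).mean()
# the offset is chosen so that roughly `contamination` of the
# training points fall below the decision threshold
assert abs(frac_out - 0.1) < 0.03
```

Note that score_samples returns negative Mahalanobis distances, so larger values mean more central (more inlier-like) observations.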
Methods
correct_covariance(data) Apply a correction to raw Minimum Covariance Determinant estimates.
decision_function(X) Compute the decision function of the given observations.
error_norm(comp_cov[, norm, scaling, squared]) Computes the Mean Squared Error between two covariance estimators.
fit(X[, y]) Fit the EllipticEnvelope model.
fit_predict(X[, y]) Perform fit on X and returns labels for X.
get_params([deep]) Get parameters for this estimator.
get_precision() Getter for the precision matrix.
mahalanobis(X) Computes the squared Mahalanobis distances of given observations.
predict(X) Predict the labels (1 inlier, -1 outlier) of X according to the fitted model.
reweight_covariance(data) Re-weight raw Minimum Covariance Determinant estimates.
score(X, y[, sample_weight]) Returns the mean accuracy on the given test data and labels.
score_samples(X) Compute the negative Mahalanobis distances.
set_params(**params) Set the parameters of this estimator.
correct_covariance(data) [source]
Apply a correction to raw Minimum Covariance Determinant estimates. Correction using the empirical correction factor suggested by Rousseeuw and Van Driessen in [RVD]. Parameters
dataarray-like of shape (n_samples, n_features)
The data matrix, with p features and n samples. The data set must be the one which was used to compute the raw estimates. Returns
covariance_correctedndarray of shape (n_features, n_features)
Corrected robust covariance estimate. References
RVD
A Fast Algorithm for the Minimum Covariance Determinant Estimator, 1999, American Statistical Association and the American Society for Quality, TECHNOMETRICS
decision_function(X) [source]
Compute the decision function of the given observations. Parameters
Xarray-like of shape (n_samples, n_features)
The data matrix. Returns
decisionndarray of shape (n_samples,)
Decision function of the samples. It is equal to the shifted Mahalanobis distances. The threshold for being an outlier is 0, which ensures a compatibility with other outlier detection algorithms.
error_norm(comp_cov, norm='frobenius', scaling=True, squared=True) [source]
Computes the Mean Squared Error between two covariance estimators. (In the sense of the Frobenius norm). Parameters
comp_covarray-like of shape (n_features, n_features)
The covariance to compare with.
norm{“frobenius”, “spectral”}, default=”frobenius”
The type of norm used to compute the error. Available error types: ‘frobenius’ (default): sqrt(tr(A^t.A)); ‘spectral’: sqrt(max(eigenvalues(A^t.A))), where A is the error (comp_cov - self.covariance_).
scalingbool, default=True
If True (default), the squared error norm is divided by n_features. If False, the squared error norm is not rescaled.
squaredbool, default=True
Whether to compute the squared error norm or the error norm. If True (default), the squared error norm is returned. If False, the error norm is returned. Returns
resultfloat
The Mean Squared Error (in the sense of the Frobenius norm) between self and comp_cov covariance estimators.
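The default behaviour (norm='frobenius', scaling=True, squared=True) can be hand-checked; a sketch using an identity matrix as the made-up comparison covariance:

```python
import numpy as np
from sklearn.covariance import EllipticEnvelope

rng = np.random.RandomState(0)
X = rng.randn(200, 3)
env = EllipticEnvelope(random_state=0).fit(X)

comp = np.eye(3)   # hypothetical covariance to compare against
A = comp - env.covariance_

# default settings: squared Frobenius norm, divided by n_features
manual = (A ** 2).sum() / 3
assert np.isclose(env.error_norm(comp), manual)
```

Passing squared=False would return the square root of this quantity instead.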
fit(X, y=None) [source]
Fit the EllipticEnvelope model. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training data.
yIgnored
Not used, present for API consistency by convention. Returns
selfobject
Returns the fitted instance.
fit_predict(X, y=None) [source]
Perform fit on X and returns labels for X. Returns -1 for outliers and 1 for inliers. Parameters
X{array-like, sparse matrix, dataframe} of shape (n_samples, n_features)
yIgnored
Not used, present for API consistency by convention. Returns
yndarray of shape (n_samples,)
1 for inliers, -1 for outliers.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
get_precision() [source]
Getter for the precision matrix. Returns
precision_array-like of shape (n_features, n_features)
The precision matrix associated to the current covariance object.
mahalanobis(X) [source]
Computes the squared Mahalanobis distances of given observations. Parameters
Xarray-like of shape (n_samples, n_features)
The observations, whose Mahalanobis distances we compute. Observations are assumed to be drawn from the same distribution as the data used in fit. Returns
distndarray of shape (n_samples,)
Squared Mahalanobis distances of the observations.
predict(X) [source]
Predict the labels (1 inlier, -1 outlier) of X according to the fitted model. Parameters
Xarray-like of shape (n_samples, n_features)
The data matrix. Returns
is_inlierndarray of shape (n_samples,)
Returns -1 for anomalies/outliers and +1 for inliers.
reweight_covariance(data) [source]
Re-weight raw Minimum Covariance Determinant estimates. Re-weight observations using Rousseeuw’s method (equivalent to deleting outlying observations from the data set before computing location and covariance estimates) described in [RVDriessen]. Parameters
dataarray-like of shape (n_samples, n_features)
The data matrix, with p features and n samples. The data set must be the one which was used to compute the raw estimates. Returns
location_reweightedndarray of shape (n_features,)
Re-weighted robust location estimate.
covariance_reweightedndarray of shape (n_features, n_features)
Re-weighted robust covariance estimate.
support_reweightedndarray of shape (n_samples,), dtype=bool
A mask of the observations that have been used to compute the re-weighted robust location and covariance estimates. References
RVDriessen
A Fast Algorithm for the Minimum Covariance Determinant Estimator, 1999, American Statistical Association and the American Society for Quality, TECHNOMETRICS
score(X, y, sample_weight=None) [source]
Returns the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) w.r.t. y.
score_samples(X) [source]
Compute the negative Mahalanobis distances. Parameters
Xarray-like of shape (n_samples, n_features)
The data matrix. Returns
negative_mahal_distancesarray-like of shape (n_samples,)
Opposite of the Mahalanobis distances.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.covariance.ellipticenvelope#sklearn.covariance.EllipticEnvelope |
sklearn.covariance.EllipticEnvelope
class sklearn.covariance.EllipticEnvelope(*, store_precision=True, assume_centered=False, support_fraction=None, contamination=0.1, random_state=None) [source]
An object for detecting outliers in a Gaussian distributed dataset. Read more in the User Guide. Parameters
store_precisionbool, default=True
Specify if the estimated precision is stored.
assume_centeredbool, default=False
If True, the support of robust location and covariance estimates is computed, and a covariance estimate is recomputed from it, without centering the data. Useful to work with data whose mean is significantly equal to zero but is not exactly zero. If False, the robust location and covariance are directly computed with the FastMCD algorithm without additional treatment.
support_fractionfloat, default=None
The proportion of points to be included in the support of the raw MCD estimate. If None, the minimum value of support_fraction will be used within the algorithm: [n_sample + n_features + 1] / 2. Range is (0, 1).
contaminationfloat, default=0.1
The amount of contamination of the data set, i.e. the proportion of outliers in the data set. Range is (0, 0.5).
random_stateint, RandomState instance or None, default=None
Determines the pseudo random number generator for shuffling the data. Pass an int for reproducible results across multiple function calls. See :term: Glossary <random_state>. Attributes
location_ndarray of shape (n_features,)
Estimated robust location.
covariance_ndarray of shape (n_features, n_features)
Estimated robust covariance matrix.
precision_ndarray of shape (n_features, n_features)
Estimated pseudo inverse matrix. (stored only if store_precision is True)
support_ndarray of shape (n_samples,)
A mask of the observations that have been used to compute the robust estimates of location and shape.
offset_float
Offset used to define the decision function from the raw scores. We have the relation: decision_function = score_samples - offset_. The offset depends on the contamination parameter and is defined in such a way we obtain the expected number of outliers (samples with decision function < 0) in training. New in version 0.20.
raw_location_ndarray of shape (n_features,)
The raw robust estimated location before correction and re-weighting.
raw_covariance_ndarray of shape (n_features, n_features)
The raw robust estimated covariance before correction and re-weighting.
raw_support_ndarray of shape (n_samples,)
A mask of the observations that have been used to compute the raw robust estimates of location and shape, before correction and re-weighting.
dist_ndarray of shape (n_samples,)
Mahalanobis distances of the training set (on which fit is called) observations. See also
EmpiricalCovariance, MinCovDet
Notes Outlier detection from covariance estimation may break down or perform poorly in high-dimensional settings. In particular, one should always take care to work with n_samples > n_features ** 2. References
1
Rousseeuw, P.J., Van Driessen, K. “A fast algorithm for the minimum covariance determinant estimator” Technometrics 41(3), 212 (1999) Examples >>> import numpy as np
>>> from sklearn.covariance import EllipticEnvelope
>>> true_cov = np.array([[.8, .3],
... [.3, .4]])
>>> X = np.random.RandomState(0).multivariate_normal(mean=[0, 0],
... cov=true_cov,
... size=500)
>>> cov = EllipticEnvelope(random_state=0).fit(X)
>>> # predict returns 1 for an inlier and -1 for an outlier
>>> cov.predict([[0, 0],
... [3, 3]])
array([ 1, -1])
>>> cov.covariance_
array([[0.7411..., 0.2535...],
[0.2535..., 0.3053...]])
>>> cov.location_
array([0.0813... , 0.0427...])
Methods
correct_covariance(data) Apply a correction to raw Minimum Covariance Determinant estimates.
decision_function(X) Compute the decision function of the given observations.
error_norm(comp_cov[, norm, scaling, squared]) Computes the Mean Squared Error between two covariance estimators.
fit(X[, y]) Fit the EllipticEnvelope model.
fit_predict(X[, y]) Perform fit on X and returns labels for X.
get_params([deep]) Get parameters for this estimator.
get_precision() Getter for the precision matrix.
mahalanobis(X) Computes the squared Mahalanobis distances of given observations.
predict(X) Predict the labels (1 inlier, -1 outlier) of X according to the fitted model.
reweight_covariance(data) Re-weight raw Minimum Covariance Determinant estimates.
score(X, y[, sample_weight]) Returns the mean accuracy on the given test data and labels.
score_samples(X) Compute the negative Mahalanobis distances.
set_params(**params) Set the parameters of this estimator.
correct_covariance(data) [source]
Apply a correction to raw Minimum Covariance Determinant estimates. Correction using the empirical correction factor suggested by Rousseeuw and Van Driessen in [RVD]. Parameters
dataarray-like of shape (n_samples, n_features)
The data matrix, with p features and n samples. The data set must be the one which was used to compute the raw estimates. Returns
covariance_correctedndarray of shape (n_features, n_features)
Corrected robust covariance estimate. References
RVD
A Fast Algorithm for the Minimum Covariance Determinant Estimator, 1999, American Statistical Association and the American Society for Quality, TECHNOMETRICS
decision_function(X) [source]
Compute the decision function of the given observations. Parameters
Xarray-like of shape (n_samples, n_features)
The data matrix. Returns
decisionndarray of shape (n_samples,)
Decision function of the samples. It is equal to the shifted Mahalanobis distances. The threshold for being an outlier is 0, which ensures compatibility with other outlier detection algorithms.
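The relation between decision_function, score_samples, and offset_ stated above can be checked directly. A minimal sketch, using synthetic two-dimensional Gaussian data (the covariance values are illustrative, not from the docs):

```python
import numpy as np
from sklearn.covariance import EllipticEnvelope

rng = np.random.RandomState(0)
X = rng.multivariate_normal(mean=[0, 0], cov=[[0.8, 0.3], [0.3, 0.4]], size=500)
est = EllipticEnvelope(random_state=0).fit(X)

scores = est.score_samples(X)        # negative Mahalanobis distances
decision = est.decision_function(X)  # shifted so that 0 is the outlier threshold
assert np.allclose(decision, scores - est.offset_)
# Samples with a strictly positive decision value are labeled inliers (+1)
assert np.array_equal(est.predict(X), np.where(decision > 0, 1, -1))
```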
error_norm(comp_cov, norm='frobenius', scaling=True, squared=True) [source]
Computes the Mean Squared Error (in the sense of the Frobenius norm) between two covariance estimators. Parameters
comp_covarray-like of shape (n_features, n_features)
The covariance to compare with.
norm{“frobenius”, “spectral”}, default=”frobenius”
The type of norm used to compute the error. Available error types: - ‘frobenius’ (default): sqrt(tr(A^t.A)) - ‘spectral’: sqrt(max(eigenvalues(A^t.A))) where A is the error (comp_cov - self.covariance_).
scalingbool, default=True
If True (default), the squared error norm is divided by n_features. If False, the squared error norm is not rescaled.
squaredbool, default=True
Whether to compute the squared error norm or the error norm. If True (default), the squared error norm is returned. If False, the error norm is returned. Returns
resultfloat
The Mean Squared Error (in the sense of the Frobenius norm) between self and comp_cov covariance estimators.
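The norm, scaling, and squared options above combine as follows. A hedged sketch comparing the fitted robust covariance against the (synthetic, illustrative) covariance that generated the data:

```python
import numpy as np
from sklearn.covariance import EllipticEnvelope

rng = np.random.RandomState(0)
true_cov = np.array([[0.8, 0.3], [0.3, 0.4]])
X = rng.multivariate_normal(mean=[0, 0], cov=true_cov, size=500)
est = EllipticEnvelope(random_state=0).fit(X)

frob = est.error_norm(true_cov)                   # squared Frobenius error, scaled by n_features
spec = est.error_norm(true_cov, norm="spectral")  # squared spectral error
raw = est.error_norm(true_cov, scaling=False, squared=False)  # plain (unsquared) norm
assert frob >= 0 and spec >= 0 and raw >= 0
```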
fit(X, y=None) [source]
Fit the EllipticEnvelope model. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training data.
yIgnored
Not used, present for API consistency by convention.
fit_predict(X, y=None) [source]
Perform fit on X and returns labels for X. Returns -1 for outliers and 1 for inliers. Parameters
X{array-like, sparse matrix, dataframe} of shape (n_samples, n_features)
yIgnored
Not used, present for API consistency by convention. Returns
yndarray of shape (n_samples,)
1 for inliers, -1 for outliers.
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
get_precision() [source]
Getter for the precision matrix. Returns
precision_array-like of shape (n_features, n_features)
The precision matrix associated to the current covariance object.
mahalanobis(X) [source]
Computes the squared Mahalanobis distances of given observations. Parameters
Xarray-like of shape (n_samples, n_features)
The observations whose Mahalanobis distances we compute. Observations are assumed to be drawn from the same distribution as the data used in fit. Returns
distndarray of shape (n_samples,)
Squared Mahalanobis distances of the observations.
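On the training data, these squared distances should coincide with the fitted dist_ attribute, and score_samples is their negation. A minimal sketch on synthetic data (assuming both relations hold, as the attribute descriptions above suggest):

```python
import numpy as np
from sklearn.covariance import EllipticEnvelope

rng = np.random.RandomState(0)
X = rng.multivariate_normal(mean=[0, 0], cov=[[0.8, 0.3], [0.3, 0.4]], size=500)
est = EllipticEnvelope(random_state=0).fit(X)

d2 = est.mahalanobis(X)            # squared Mahalanobis distances
assert np.allclose(d2, est.dist_)  # dist_ stores the training-set distances
assert np.allclose(est.score_samples(X), -d2)
```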
predict(X) [source]
Predict the labels (1 inlier, -1 outlier) of X according to the fitted model. Parameters
Xarray-like of shape (n_samples, n_features)
The data matrix. Returns
is_inlierndarray of shape (n_samples,)
Returns -1 for anomalies/outliers and +1 for inliers.
reweight_covariance(data) [source]
Re-weight raw Minimum Covariance Determinant estimates. Re-weight observations using Rousseeuw’s method (equivalent to deleting outlying observations from the data set before computing location and covariance estimates) described in [RVDriessen]. Parameters
dataarray-like of shape (n_samples, n_features)
The data matrix, with p features and n samples. The data set must be the one which was used to compute the raw estimates. Returns
location_reweightedndarray of shape (n_features,)
Re-weighted robust location estimate.
covariance_reweightedndarray of shape (n_features, n_features)
Re-weighted robust covariance estimate.
support_reweightedndarray of shape (n_samples,), dtype=bool
A mask of the observations that have been used to compute the re-weighted robust location and covariance estimates. References
RVDriessen
A Fast Algorithm for the Minimum Covariance Determinant Estimator, 1999, American Statistical Association and the American Society for Quality, TECHNOMETRICS
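Both correction steps can be invoked by hand after fitting, on the same data that was used to fit (as required above). A hedged sketch; note that both calls update the estimator's internal state (e.g. dist_), so this is for inspection rather than routine use:

```python
import numpy as np
from sklearn.covariance import EllipticEnvelope

rng = np.random.RandomState(0)
X = rng.multivariate_normal(mean=[0, 0], cov=[[0.8, 0.3], [0.3, 0.4]], size=500)
est = EllipticEnvelope(random_state=0).fit(X)

corrected = est.correct_covariance(X)             # consistency-corrected raw covariance
loc_rw, cov_rw, support_rw = est.reweight_covariance(X)
assert corrected.shape == (2, 2) and cov_rw.shape == (2, 2)
assert support_rw.dtype == bool and support_rw.shape == (500,)
```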
score(X, y, sample_weight=None) [source]
Returns the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters
Xarray-like of shape (n_samples, n_features)
Test samples.
yarray-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
Mean accuracy of self.predict(X) w.r.t. y.
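Since predict returns +/-1 labels, score is plain classification accuracy against a +/-1 ground truth. A trivial sketch on synthetic data, scoring the model against its own predictions:

```python
import numpy as np
from sklearn.covariance import EllipticEnvelope

rng = np.random.RandomState(0)
X = rng.multivariate_normal(mean=[0, 0], cov=[[0.8, 0.3], [0.3, 0.4]], size=500)
est = EllipticEnvelope(random_state=0).fit(X)

y_pred = est.predict(X)
assert est.score(X, y_pred) == 1.0            # accuracy of predict(X) against itself
assert 0.0 <= est.score(X, -y_pred) <= 1.0    # accuracy against flipped labels
```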
score_samples(X) [source]
Compute the negative Mahalanobis distances. Parameters
Xarray-like of shape (n_samples, n_features)
The data matrix. Returns
negative_mahal_distancesarray-like of shape (n_samples,)
Opposite of the Mahalanobis distances.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
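The <component>__<parameter> syntax mentioned above can be seen with a pipeline. A minimal sketch (the step name "ellipticenvelope" is the lowercased class name that make_pipeline assigns):

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.covariance import EllipticEnvelope

pipe = make_pipeline(StandardScaler(), EllipticEnvelope())
# Address the contamination parameter of the nested EllipticEnvelope step
pipe.set_params(ellipticenvelope__contamination=0.2)
assert pipe.get_params()["ellipticenvelope__contamination"] == 0.2
```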
Examples using sklearn.covariance.EllipticEnvelope
Outlier detection on a real data set
Comparing anomaly detection algorithms for outlier detection on toy datasets | sklearn.modules.generated.sklearn.covariance.ellipticenvelope |
class sklearn.covariance.EmpiricalCovariance(*, store_precision=True, assume_centered=False) [source]
Maximum likelihood covariance estimator Read more in the User Guide. Parameters
store_precisionbool, default=True
Specifies if the estimated precision is stored.
assume_centeredbool, default=False
If True, data are not centered before computation. Useful when working with data whose mean is almost, but not exactly zero. If False (default), data are centered before computation. Attributes
location_ndarray of shape (n_features,)
Estimated location, i.e. the estimated mean.
covariance_ndarray of shape (n_features, n_features)
Estimated covariance matrix
precision_ndarray of shape (n_features, n_features)
Estimated pseudo-inverse matrix. (stored only if store_precision is True) Examples >>> import numpy as np
>>> from sklearn.covariance import EmpiricalCovariance
>>> from sklearn.datasets import make_gaussian_quantiles
>>> real_cov = np.array([[.8, .3],
... [.3, .4]])
>>> rng = np.random.RandomState(0)
>>> X = rng.multivariate_normal(mean=[0, 0],
... cov=real_cov,
... size=500)
>>> cov = EmpiricalCovariance().fit(X)
>>> cov.covariance_
array([[0.7569..., 0.2818...],
[0.2818..., 0.3928...]])
>>> cov.location_
array([0.0622..., 0.0193...])
Methods
error_norm(comp_cov[, norm, scaling, squared]) Computes the Mean Squared Error between two covariance estimators.
fit(X[, y]) Fits the Maximum Likelihood Estimator covariance model according to the given training data and parameters.
get_params([deep]) Get parameters for this estimator.
get_precision() Getter for the precision matrix.
mahalanobis(X) Computes the squared Mahalanobis distances of given observations.
score(X_test[, y]) Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix.
set_params(**params) Set the parameters of this estimator.
error_norm(comp_cov, norm='frobenius', scaling=True, squared=True) [source]
Computes the Mean Squared Error (in the sense of the Frobenius norm) between two covariance estimators. Parameters
comp_covarray-like of shape (n_features, n_features)
The covariance to compare with.
norm{“frobenius”, “spectral”}, default=”frobenius”
The type of norm used to compute the error. Available error types: - ‘frobenius’ (default): sqrt(tr(A^t.A)) - ‘spectral’: sqrt(max(eigenvalues(A^t.A))) where A is the error (comp_cov - self.covariance_).
scalingbool, default=True
If True (default), the squared error norm is divided by n_features. If False, the squared error norm is not rescaled.
squaredbool, default=True
Whether to compute the squared error norm or the error norm. If True (default), the squared error norm is returned. If False, the error norm is returned. Returns
resultfloat
The Mean Squared Error (in the sense of the Frobenius norm) between self and comp_cov covariance estimators.
fit(X, y=None) [source]
Fits the Maximum Likelihood Estimator covariance model according to the given training data and parameters. Parameters
Xarray-like of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.
yIgnored
Not used, present for API consistency by convention. Returns
selfobject
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
get_precision() [source]
Getter for the precision matrix. Returns
precision_array-like of shape (n_features, n_features)
The precision matrix associated to the current covariance object.
mahalanobis(X) [source]
Computes the squared Mahalanobis distances of given observations. Parameters
Xarray-like of shape (n_samples, n_features)
The observations whose Mahalanobis distances we compute. Observations are assumed to be drawn from the same distribution as the data used in fit. Returns
distndarray of shape (n_samples,)
Squared Mahalanobis distances of the observations.
score(X_test, y=None) [source]
Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix. Parameters
X_testarray-like of shape (n_samples, n_features)
Test data of which we compute the likelihood, where n_samples is the number of samples and n_features is the number of features. X_test is assumed to be drawn from the same distribution as the data used in fit (including centering).
yIgnored
Not used, present for API consistency by convention. Returns
resfloat
The likelihood of the data set with self.covariance_ as an estimator of its covariance matrix.
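The score method thus evaluates held-out data under the fitted Gaussian. A hedged sketch on synthetic data (higher is better; for continuous data the value is typically negative):

```python
import numpy as np
from sklearn.covariance import EmpiricalCovariance

rng = np.random.RandomState(0)
true_cov = np.array([[0.8, 0.3], [0.3, 0.4]])
X_train = rng.multivariate_normal(mean=[0, 0], cov=true_cov, size=500)
X_test = rng.multivariate_normal(mean=[0, 0], cov=true_cov, size=100)

cov = EmpiricalCovariance().fit(X_train)
ll = cov.score(X_test)   # log-likelihood of the test set under the fitted model
assert np.isfinite(ll)
```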
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.covariance.empiricalcovariance#sklearn.covariance.EmpiricalCovariance |
Examples using sklearn.covariance.EmpiricalCovariance
Shrinkage covariance estimation: LedoitWolf vs OAS and max-likelihood
Robust covariance estimation and Mahalanobis distances relevance
Robust vs Empirical covariance estimate | sklearn.modules.generated.sklearn.covariance.empiricalcovariance |
sklearn.covariance.empirical_covariance(X, *, assume_centered=False) [source]
Computes the Maximum Likelihood covariance estimator. Parameters
Xndarray of shape (n_samples, n_features)
Data from which to compute the covariance estimate
assume_centeredbool, default=False
If True, data will not be centered before computation. Useful when working with data whose mean is almost, but not exactly zero. If False, data will be centered before computation. Returns
covariancendarray of shape (n_features, n_features)
Empirical covariance (Maximum Likelihood Estimator). Examples >>> from sklearn.covariance import empirical_covariance
>>> X = [[1,1,1],[1,1,1],[1,1,1],
... [0,0,0],[0,0,0],[0,0,0]]
>>> empirical_covariance(X)
array([[0.25, 0.25, 0.25],
[0.25, 0.25, 0.25],
[0.25, 0.25, 0.25]]) | sklearn.modules.generated.sklearn.covariance.empirical_covariance#sklearn.covariance.empirical_covariance |
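The assume_centered flag changes what the function computes: with centering the result is the usual empirical covariance; without it, the raw second moment E[x x^T]. A sketch on the same toy data as above:

```python
import numpy as np
from sklearn.covariance import empirical_covariance

X = np.array([[1., 1., 1.]] * 3 + [[0., 0., 0.]] * 3)
cov = empirical_covariance(X)                            # centered: all entries 0.25
cov_raw = empirical_covariance(X, assume_centered=True)  # raw second moment: all 0.5
assert np.allclose(cov, 0.25)
assert np.allclose(cov_raw, 0.5)
```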
class sklearn.covariance.GraphicalLasso(alpha=0.01, *, mode='cd', tol=0.0001, enet_tol=0.0001, max_iter=100, verbose=False, assume_centered=False) [source]
Sparse inverse covariance estimation with an l1-penalized estimator. Read more in the User Guide. Changed in version 0.20: GraphLasso has been renamed to GraphicalLasso. Parameters
alphafloat, default=0.01
The regularization parameter: the higher alpha, the more regularization, the sparser the inverse covariance. Range is (0, inf].
mode{‘cd’, ‘lars’}, default=’cd’
The Lasso solver to use: coordinate descent or LARS. Use LARS for very sparse underlying graphs, where p > n. Otherwise prefer cd, which is more numerically stable.
tolfloat, default=1e-4
The tolerance to declare convergence: if the dual gap goes below this value, iterations are stopped. Range is (0, inf].
enet_tolfloat, default=1e-4
The tolerance for the elastic net solver used to calculate the descent direction. This parameter controls the accuracy of the search direction for a given column update, not of the overall parameter estimate. Only used for mode=’cd’. Range is (0, inf].
max_iterint, default=100
The maximum number of iterations.
verbosebool, default=False
If verbose is True, the objective function and dual gap are plotted at each iteration.
assume_centeredbool, default=False
If True, data are not centered before computation. Useful when working with data whose mean is almost, but not exactly zero. If False, data are centered before computation. Attributes
location_ndarray of shape (n_features,)
Estimated location, i.e. the estimated mean.
covariance_ndarray of shape (n_features, n_features)
Estimated covariance matrix
precision_ndarray of shape (n_features, n_features)
Estimated pseudo inverse matrix.
n_iter_int
Number of iterations run. See also
graphical_lasso, GraphicalLassoCV
Examples >>> import numpy as np
>>> from sklearn.covariance import GraphicalLasso
>>> true_cov = np.array([[0.8, 0.0, 0.2, 0.0],
... [0.0, 0.4, 0.0, 0.0],
... [0.2, 0.0, 0.3, 0.1],
... [0.0, 0.0, 0.1, 0.7]])
>>> np.random.seed(0)
>>> X = np.random.multivariate_normal(mean=[0, 0, 0, 0],
... cov=true_cov,
... size=200)
>>> cov = GraphicalLasso().fit(X)
>>> np.around(cov.covariance_, decimals=3)
array([[0.816, 0.049, 0.218, 0.019],
[0.049, 0.364, 0.017, 0.034],
[0.218, 0.017, 0.322, 0.093],
[0.019, 0.034, 0.093, 0.69 ]])
>>> np.around(cov.location_, decimals=3)
array([0.073, 0.04 , 0.038, 0.143])
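The l1 penalty acts on the precision matrix, so its effect is easiest to see there: raising alpha drives more off-diagonal entries of precision_ to exactly zero. A small sketch along the lines of the example above (the two alpha values are illustrative choices, not recommendations):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

true_cov = np.array([[0.8, 0.0, 0.2, 0.0],
                     [0.0, 0.4, 0.0, 0.0],
                     [0.2, 0.0, 0.3, 0.1],
                     [0.0, 0.0, 0.1, 0.7]])
rng = np.random.RandomState(0)
X = rng.multivariate_normal(mean=np.zeros(4), cov=true_cov, size=200)

# Fit with a stronger and a weaker l1 penalty, then count (near-)exact
# zeros in the estimated inverse covariance.
strong = GraphicalLasso(alpha=0.05).fit(X)
weak = GraphicalLasso(alpha=0.001).fit(X)
zeros_strong = int(np.sum(np.abs(strong.precision_) < 1e-8))
zeros_weak = int(np.sum(np.abs(weak.precision_) < 1e-8))
# More regularization does not yield a denser precision estimate here.
```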
Methods
error_norm(comp_cov[, norm, scaling, squared]) Computes the Mean Squared Error between two covariance estimators.
fit(X[, y]) Fits the GraphicalLasso model to X.
get_params([deep]) Get parameters for this estimator.
get_precision() Getter for the precision matrix.
mahalanobis(X) Computes the squared Mahalanobis distances of given observations.
score(X_test[, y]) Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix.
set_params(**params) Set the parameters of this estimator.
error_norm(comp_cov, norm='frobenius', scaling=True, squared=True) [source]
Computes the Mean Squared Error between two covariance estimators, in the sense of the Frobenius norm.

Parameters
comp_cov : array-like of shape (n_features, n_features)
The covariance to compare with.
norm : {"frobenius", "spectral"}, default="frobenius"
The type of norm used to compute the error. Available error types:
- 'frobenius' (default): sqrt(tr(A^t.A))
- 'spectral': sqrt(max(eigenvalues(A^t.A)))
where A is the error (comp_cov - self.covariance_).
scaling : bool, default=True
If True (default), the squared error norm is divided by n_features. If False, the squared error norm is not rescaled.
squared : bool, default=True
Whether to compute the squared error norm or the error norm. If True (default), the squared error norm is returned. If False, the error norm is returned.

Returns
result : float
The Mean Squared Error (in the sense of the Frobenius norm) between self and comp_cov covariance estimators.
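For the default Frobenius norm, the quantity described above can be written out directly. The following stand-alone helper mirrors the parameter semantics (scaling divides the squared norm by n_features, squared controls the final square root); it is an illustrative re-implementation, not scikit-learn's internal code:

```python
import numpy as np

def frobenius_error(comp_cov, est_cov, scaling=True, squared=True):
    """Mean Squared Error between two covariance matrices, Frobenius sense."""
    A = comp_cov - est_cov              # the error matrix
    result = np.sum(A ** 2)             # tr(A^t.A), the squared Frobenius norm
    if scaling:
        result /= comp_cov.shape[0]     # divide by n_features
    return result if squared else np.sqrt(result)
```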
fit(X, y=None) [source]
Fits the GraphicalLasso model to X.

Parameters
X : array-like of shape (n_samples, n_features)
Data from which to compute the covariance estimate.
y : Ignored
Not used, present for API consistency by convention.

Returns
self : object
get_params(deep=True) [source]
Get parameters for this estimator.

Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns
params : dict
Parameter names mapped to their values.
get_precision() [source]
Getter for the precision matrix.

Returns
precision_ : array-like of shape (n_features, n_features)
The precision matrix associated with the current covariance object.
mahalanobis(X) [source]
Computes the squared Mahalanobis distances of given observations.

Parameters
X : array-like of shape (n_samples, n_features)
The observations, the Mahalanobis distances of which we compute. Observations are assumed to be drawn from the same distribution as the data used in fit.

Returns
dist : ndarray of shape (n_samples,)
Squared Mahalanobis distances of the observations.
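Given a fitted location_ and a precision matrix (get_precision()), the squared Mahalanobis distance of each row is the quadratic form (x - mu)^T P (x - mu). A minimal sketch; the helper name is illustrative:

```python
import numpy as np

def squared_mahalanobis(X, location, precision):
    """Squared Mahalanobis distance of each row of X under the model
    with mean `location` and inverse covariance `precision`."""
    Xc = X - location
    # Row-wise quadratic form (x - mu)^T P (x - mu).
    return np.einsum("ij,jk,ik->i", Xc, precision, Xc)

# With zero mean and identity precision, the distance reduces to the
# squared Euclidean norm of each observation.
d = squared_mahalanobis(np.array([[3., 4.], [0., 0.]]),
                        np.zeros(2), np.eye(2))
```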
score(X_test, y=None) [source]
Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix.

Parameters
X_test : array-like of shape (n_samples, n_features)
Test data of which we compute the likelihood, where n_samples is the number of samples and n_features is the number of features. X_test is assumed to be drawn from the same distribution as the data used in fit (including centering).
y : Ignored
Not used, present for API consistency by convention.

Returns
res : float
The log-likelihood of the data set with self.covariance_ as an estimator of its covariance matrix.
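The returned value is the average Gaussian log-likelihood of the test data under the fitted mean and covariance. Up to the arrangement of constant terms it can be sketched as follows (illustrative, not scikit-learn's internal helper):

```python
import numpy as np

def gaussian_score(X_test, location, precision):
    """Average Gaussian log-likelihood of X_test for a model with the
    given mean (`location`) and inverse covariance (`precision`)."""
    n, p = X_test.shape
    Xc = X_test - location
    emp_cov = Xc.T @ Xc / n              # empirical covariance of the test set
    _, logdet = np.linalg.slogdet(precision)
    # 0.5 * (log|P| - tr(S P) - p * log(2*pi))
    return 0.5 * (logdet - np.sum(emp_cov * precision) - p * np.log(2 * np.pi))
```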
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.

Parameters
**params : dict
Estimator parameters.

Returns
self : estimator instance
Estimator instance.
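The <component>__<parameter> form mentioned above looks like this when the estimator sits inside a Pipeline (the step names "scale" and "glasso" are arbitrary labels chosen for this sketch):

```python
from sklearn.covariance import GraphicalLasso
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

pipe = Pipeline([("scale", StandardScaler()),
                 ("glasso", GraphicalLasso(alpha=0.01))])

# Reach into the nested estimator with the <component>__<parameter> syntax.
pipe.set_params(glasso__alpha=0.1, glasso__max_iter=200)
```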
class sklearn.covariance.GraphicalLassoCV(*, alphas=4, n_refinements=4, cv=None, tol=0.0001, enet_tol=0.0001, max_iter=100, mode='cd', n_jobs=None, verbose=False, assume_centered=False) [source]
Sparse inverse covariance with cross-validated choice of the l1 penalty. See the Glossary entry for cross-validation estimator. Read more in the User Guide.

Changed in version 0.20: GraphLassoCV has been renamed to GraphicalLassoCV.

Parameters
alphas : int or array-like of shape (n_alphas,), dtype=float, default=4
If an integer is given, it fixes the number of points on the grids of alpha to be used. If a list is given, it gives the grid to be used. See the notes in the class docstring for more details. Range is (0, inf] when floats are given.
n_refinements : int, default=4
The number of times the grid is refined. Not used if explicit values of alphas are passed. Range is [1, inf).
cv : int, cross-validation generator or iterable, default=None
Determines the cross-validation splitting strategy. Possible inputs for cv are:
- None, to use the default 5-fold cross-validation,
- integer, to specify the number of folds,
- CV splitter,
- an iterable yielding (train, test) splits as arrays of indices.
For integer/None inputs, KFold is used. Refer to the User Guide for the various cross-validation strategies that can be used here.
Changed in version 0.20: cv default value if None changed from 3-fold to 5-fold.
tol : float, default=1e-4
The tolerance to declare convergence: if the dual gap goes below this value, iterations are stopped. Range is (0, inf].
enet_tol : float, default=1e-4
The tolerance for the elastic net solver used to calculate the descent direction. This parameter controls the accuracy of the search direction for a given column update, not of the overall parameter estimate. Only used for mode='cd'. Range is (0, inf].
max_iter : int, default=100
Maximum number of iterations.
mode : {'cd', 'lars'}, default='cd'
The Lasso solver to use: coordinate descent or LARS. Use LARS for very sparse underlying graphs, where the number of features is greater than the number of samples. Elsewhere prefer cd, which is more numerically stable.
n_jobs : int, default=None
Number of jobs to run in parallel. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See the Glossary for more details.
Changed in version 0.20: n_jobs default changed from 1 to None.
verbose : bool, default=False
If verbose is True, the objective function and duality gap are printed at each iteration.
assume_centered : bool, default=False
If True, data are not centered before computation. Useful when working with data whose mean is almost, but not exactly, zero. If False, data are centered before computation.

Attributes
location_ : ndarray of shape (n_features,)
Estimated location, i.e. the estimated mean.
covariance_ : ndarray of shape (n_features, n_features)
Estimated covariance matrix.
precision_ : ndarray of shape (n_features, n_features)
Estimated precision matrix (inverse covariance).
alpha_ : float
Penalization parameter selected.
cv_alphas_ : list of shape (n_alphas,), dtype=float
All penalization parameters explored. Deprecated since version 0.24: the cv_alphas_ attribute is deprecated in version 0.24 in favor of cv_results_['alphas'] and will be removed in version 1.1 (renaming of 0.26).
grid_scores_ : ndarray of shape (n_alphas, n_folds)
Log-likelihood score on left-out data across folds. Deprecated since version 0.24: the grid_scores_ attribute is deprecated in version 0.24 in favor of cv_results_ and will be removed in version 1.1 (renaming of 0.26).
cv_results_ : dict of ndarrays
A dict with the keys:
alphas : ndarray of shape (n_alphas,)
All penalization parameters explored.
split(k)_score : ndarray of shape (n_alphas,)
Log-likelihood score on left-out data across the k-th fold.
mean_score : ndarray of shape (n_alphas,)
Mean of scores over the folds.
std_score : ndarray of shape (n_alphas,)
Standard deviation of scores over the folds.
New in version 0.24.
n_iter_ : int
Number of iterations run for the optimal alpha.

See also
graphical_lasso, GraphicalLasso

Notes
The search for the optimal penalization parameter (alpha) is done on an iteratively refined grid: first the cross-validated scores on a grid are computed, then a new refined grid is centered around the maximum, and so on. One of the challenges faced here is that the solvers can fail to converge to a well-conditioned estimate. The corresponding values of alpha then come out as missing values, but the optimum may be close to these missing values.

Examples
>>> import numpy as np
>>> from sklearn.covariance import GraphicalLassoCV
>>> true_cov = np.array([[0.8, 0.0, 0.2, 0.0],
... [0.0, 0.4, 0.0, 0.0],
... [0.2, 0.0, 0.3, 0.1],
... [0.0, 0.0, 0.1, 0.7]])
>>> np.random.seed(0)
>>> X = np.random.multivariate_normal(mean=[0, 0, 0, 0],
... cov=true_cov,
... size=200)
>>> cov = GraphicalLassoCV().fit(X)
>>> np.around(cov.covariance_, decimals=3)
array([[0.816, 0.051, 0.22 , 0.017],
[0.051, 0.364, 0.018, 0.036],
[0.22 , 0.018, 0.322, 0.094],
[0.017, 0.036, 0.094, 0.69 ]])
>>> np.around(cov.location_, decimals=3)
array([0.073, 0.04 , 0.038, 0.143])
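After fitting, the selected penalty is available as alpha_ and the explored grid as cv_results_["alphas"]. Continuing the example above:

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

true_cov = np.array([[0.8, 0.0, 0.2, 0.0],
                     [0.0, 0.4, 0.0, 0.0],
                     [0.2, 0.0, 0.3, 0.1],
                     [0.0, 0.0, 0.1, 0.7]])
rng = np.random.RandomState(0)
X = rng.multivariate_normal(mean=np.zeros(4), cov=true_cov, size=200)

cv = GraphicalLassoCV().fit(X)
best_alpha = cv.alpha_              # the cross-validated penalty choice
grid = cv.cv_results_["alphas"]     # every alpha explored on the refined grid
```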
Methods
error_norm(comp_cov[, norm, scaling, squared]) Computes the Mean Squared Error between two covariance estimators.
fit(X[, y]) Fits the GraphicalLasso covariance model to X.
get_params([deep]) Get parameters for this estimator.
get_precision() Getter for the precision matrix.
mahalanobis(X) Computes the squared Mahalanobis distances of given observations.
score(X_test[, y]) Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix.
set_params(**params) Set the parameters of this estimator.
error_norm(comp_cov, norm='frobenius', scaling=True, squared=True) [source]
Computes the Mean Squared Error between two covariance estimators, in the sense of the Frobenius norm.

Parameters
comp_cov : array-like of shape (n_features, n_features)
The covariance to compare with.
norm : {"frobenius", "spectral"}, default="frobenius"
The type of norm used to compute the error. Available error types:
- 'frobenius' (default): sqrt(tr(A^t.A))
- 'spectral': sqrt(max(eigenvalues(A^t.A)))
where A is the error (comp_cov - self.covariance_).
scaling : bool, default=True
If True (default), the squared error norm is divided by n_features. If False, the squared error norm is not rescaled.
squared : bool, default=True
Whether to compute the squared error norm or the error norm. If True (default), the squared error norm is returned. If False, the error norm is returned.

Returns
result : float
The Mean Squared Error (in the sense of the Frobenius norm) between self and comp_cov covariance estimators.
fit(X, y=None) [source]
Fits the GraphicalLasso covariance model to X.

Parameters
X : array-like of shape (n_samples, n_features)
Data from which to compute the covariance estimate.
y : Ignored
Not used, present for API consistency by convention.

Returns
self : object
get_params(deep=True) [source]
Get parameters for this estimator.

Parameters
deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns
params : dict
Parameter names mapped to their values.
get_precision() [source]
Getter for the precision matrix.

Returns
precision_ : array-like of shape (n_features, n_features)
The precision matrix associated with the current covariance object.
mahalanobis(X) [source]
Computes the squared Mahalanobis distances of given observations.

Parameters
X : array-like of shape (n_samples, n_features)
The observations, the Mahalanobis distances of which we compute. Observations are assumed to be drawn from the same distribution as the data used in fit.

Returns
dist : ndarray of shape (n_samples,)
Squared Mahalanobis distances of the observations.
score(X_test, y=None) [source]
Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix.

Parameters
X_test : array-like of shape (n_samples, n_features)
Test data of which we compute the likelihood, where n_samples is the number of samples and n_features is the number of features. X_test is assumed to be drawn from the same distribution as the data used in fit (including centering).
y : Ignored
Not used, present for API consistency by convention.

Returns
res : float
The log-likelihood of the data set with self.covariance_ as an estimator of its covariance matrix.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.

Parameters
**params : dict
Estimator parameters.

Returns
self : estimator instance
Estimator instance.
Examples using sklearn.covariance.GraphicalLassoCV
Sparse inverse covariance estimation
Visualizing the stock market structure | sklearn.modules.generated.sklearn.covariance.graphicallassocv |
error_norm(comp_cov, norm='frobenius', scaling=True, squared=True) [source]
Computes the Mean Squared Error (in the sense of the Frobenius norm) between two covariance estimators. Parameters
comp_covarray-like of shape (n_features, n_features)
The covariance to compare with.
norm{“frobenius”, “spectral”}, default=”frobenius”
The type of norm used to compute the error. Available error types: - ‘frobenius’ (default): sqrt(tr(A^t.A)) - ‘spectral’: sqrt(max(eigenvalues(A^t.A))) where A is the error (comp_cov - self.covariance_).
scalingbool, default=True
If True (default), the squared error norm is divided by n_features. If False, the squared error norm is not rescaled.
squaredbool, default=True
Whether to compute the squared error norm or the error norm. If True (default), the squared error norm is returned. If False, the error norm is returned. Returns
resultfloat
The Mean Squared Error (in the sense of the Frobenius norm) between self and comp_cov covariance estimators. | sklearn.modules.generated.sklearn.covariance.graphicallassocv#sklearn.covariance.GraphicalLassoCV.error_norm |
fit(X, y=None) [source]
Fits the GraphicalLasso covariance model to X. Parameters
Xarray-like of shape (n_samples, n_features)
Data from which to compute the covariance estimate.
yIgnored
Not used, present for API consistency by convention. Returns
selfobject | sklearn.modules.generated.sklearn.covariance.graphicallassocv#sklearn.covariance.GraphicalLassoCV.fit |
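The fit workflow described above can be sketched as follows. This is a minimal illustration on synthetic Gaussian data (the covariance values are made up for the example, not taken from the source): GraphicalLassoCV selects the regularization strength by cross-validation and exposes the fitted covariance and sparse precision matrices.

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.RandomState(0)
# Illustrative ground-truth covariance with a sparse inverse.
true_cov = np.array([[0.8, 0.0, 0.2, 0.0],
                     [0.0, 0.4, 0.0, 0.0],
                     [0.2, 0.0, 0.3, 0.1],
                     [0.0, 0.0, 0.1, 0.7]])
X = rng.multivariate_normal(mean=np.zeros(4), cov=true_cov, size=200)

# Cross-validated choice of the l1 regularization parameter.
model = GraphicalLassoCV().fit(X)
cov_est = model.covariance_    # (n_features, n_features) covariance estimate
prec_est = model.precision_    # estimated sparse inverse covariance
```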
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.covariance.graphicallassocv#sklearn.covariance.GraphicalLassoCV.get_params |
get_precision() [source]
Getter for the precision matrix. Returns
precision_array-like of shape (n_features, n_features)
The precision matrix associated to the current covariance object. | sklearn.modules.generated.sklearn.covariance.graphicallassocv#sklearn.covariance.GraphicalLassoCV.get_precision |
mahalanobis(X) [source]
Computes the squared Mahalanobis distances of given observations. Parameters
Xarray-like of shape (n_samples, n_features)
The observations whose Mahalanobis distances we compute. Observations are assumed to be drawn from the same distribution as the data used in fit. Returns
distndarray of shape (n_samples,)
Squared Mahalanobis distances of the observations. | sklearn.modules.generated.sklearn.covariance.graphicallassocv#sklearn.covariance.GraphicalLassoCV.mahalanobis |
score(X_test, y=None) [source]
Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix. Parameters
X_testarray-like of shape (n_samples, n_features)
Test data of which we compute the likelihood, where n_samples is the number of samples and n_features is the number of features. X_test is assumed to be drawn from the same distribution as the data used in fit (including centering).
yIgnored
Not used, present for API consistency by convention. Returns
resfloat
The likelihood of the data set with self.covariance_ as an estimator of its covariance matrix. | sklearn.modules.generated.sklearn.covariance.graphicallassocv#sklearn.covariance.GraphicalLassoCV.score |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.covariance.graphicallassocv#sklearn.covariance.GraphicalLassoCV.set_params |
sklearn.covariance.graphical_lasso(emp_cov, alpha, *, cov_init=None, mode='cd', tol=0.0001, enet_tol=0.0001, max_iter=100, verbose=False, return_costs=False, eps=2.220446049250313e-16, return_n_iter=False) [source]
L1-penalized covariance estimator. Read more in the User Guide. Changed in version 0.20: graph_lasso has been renamed to graphical_lasso. Parameters
emp_covndarray of shape (n_features, n_features)
Empirical covariance from which to compute the covariance estimate.
alphafloat
The regularization parameter: the higher alpha, the more regularization, the sparser the inverse covariance. Range is (0, inf].
cov_initarray of shape (n_features, n_features), default=None
The initial guess for the covariance. If None, then the empirical covariance is used.
mode{‘cd’, ‘lars’}, default=’cd’
The Lasso solver to use: coordinate descent or LARS. Use LARS for very sparse underlying graphs, where p > n. Elsewhere prefer cd which is more numerically stable.
tolfloat, default=1e-4
The tolerance to declare convergence: if the dual gap goes below this value, iterations are stopped. Range is (0, inf].
enet_tolfloat, default=1e-4
The tolerance for the elastic net solver used to calculate the descent direction. This parameter controls the accuracy of the search direction for a given column update, not of the overall parameter estimate. Only used for mode=’cd’. Range is (0, inf].
max_iterint, default=100
The maximum number of iterations.
verbosebool, default=False
If verbose is True, the objective function and dual gap are printed at each iteration.
return_costsbool, default=False
If return_costs is True, the objective function and dual gap at each iteration are returned.
epsfloat, default=eps
The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Default is np.finfo(np.float64).eps.
return_n_iterbool, default=False
Whether or not to return the number of iterations. Returns
covariancendarray of shape (n_features, n_features)
The estimated covariance matrix.
precisionndarray of shape (n_features, n_features)
The estimated (sparse) precision matrix.
costslist of (objective, dual_gap) pairs
The list of values of the objective function and the dual gap at each iteration. Returned only if return_costs is True.
n_iterint
Number of iterations. Returned only if return_n_iter is set to True. See also
GraphicalLasso, GraphicalLassoCV
Notes The algorithm employed to solve this problem is the GLasso algorithm, from the Friedman 2008 Biostatistics paper. It is the same algorithm as in the R glasso package. One possible difference with the glasso R package is that the diagonal coefficients are not penalized. | sklearn.modules.generated.sklearn.covariance.graphical_lasso#sklearn.covariance.graphical_lasso |
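A minimal usage sketch of the function form: unlike the GraphicalLasso estimator, graphical_lasso operates on an empirical covariance matrix rather than on raw data, so one is computed first with sklearn.covariance.empirical_covariance. The data values are illustrative.

```python
import numpy as np
from sklearn.covariance import empirical_covariance, graphical_lasso

rng = np.random.RandomState(0)
true_cov = np.array([[0.8, 0.2],
                     [0.2, 0.4]])
X = rng.multivariate_normal(mean=[0, 0], cov=true_cov, size=200)

# graphical_lasso takes an empirical covariance, not the samples themselves.
emp_cov = empirical_covariance(X)
covariance, precision = graphical_lasso(emp_cov, alpha=0.05)
```

A larger alpha yields a sparser precision matrix, at the cost of more bias in the covariance estimate.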
class sklearn.covariance.LedoitWolf(*, store_precision=True, assume_centered=False, block_size=1000) [source]
LedoitWolf Estimator Ledoit-Wolf is a particular form of shrinkage, where the shrinkage coefficient is computed using O. Ledoit and M. Wolf’s formula as described in “A Well-Conditioned Estimator for Large-Dimensional Covariance Matrices”, Ledoit and Wolf, Journal of Multivariate Analysis, Volume 88, Issue 2, February 2004, pages 365-411. Read more in the User Guide. Parameters
store_precisionbool, default=True
Specify if the estimated precision is stored.
assume_centeredbool, default=False
If True, data will not be centered before computation. Useful when working with data whose mean is almost, but not exactly zero. If False (default), data will be centered before computation.
block_sizeint, default=1000
Size of blocks into which the covariance matrix will be split during its Ledoit-Wolf estimation. This is purely a memory optimization and does not affect results. Attributes
covariance_ndarray of shape (n_features, n_features)
Estimated covariance matrix.
location_ndarray of shape (n_features,)
Estimated location, i.e. the estimated mean.
precision_ndarray of shape (n_features, n_features)
Estimated pseudo inverse matrix. (stored only if store_precision is True)
shrinkage_float
Coefficient in the convex combination used for the computation of the shrunk estimate. Range is [0, 1]. Notes The regularised covariance is: (1 - shrinkage) * cov + shrinkage * mu * np.identity(n_features) where mu = trace(cov) / n_features and shrinkage is given by the Ledoit and Wolf formula (see References) References “A Well-Conditioned Estimator for Large-Dimensional Covariance Matrices”, Ledoit and Wolf, Journal of Multivariate Analysis, Volume 88, Issue 2, February 2004, pages 365-411. Examples >>> import numpy as np
>>> from sklearn.covariance import LedoitWolf
>>> real_cov = np.array([[.4, .2],
... [.2, .8]])
>>> np.random.seed(0)
>>> X = np.random.multivariate_normal(mean=[0, 0],
... cov=real_cov,
... size=50)
>>> cov = LedoitWolf().fit(X)
>>> cov.covariance_
array([[0.4406..., 0.1616...],
[0.1616..., 0.8022...]])
>>> cov.location_
array([ 0.0595... , -0.0075...])
Methods
error_norm(comp_cov[, norm, scaling, squared]) Computes the Mean Squared Error between two covariance estimators.
fit(X[, y]) Fit the Ledoit-Wolf shrunk covariance model according to the given training data and parameters.
get_params([deep]) Get parameters for this estimator.
get_precision() Getter for the precision matrix.
mahalanobis(X) Computes the squared Mahalanobis distances of given observations.
score(X_test[, y]) Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix.
set_params(**params) Set the parameters of this estimator.
error_norm(comp_cov, norm='frobenius', scaling=True, squared=True) [source]
Computes the Mean Squared Error (in the sense of the Frobenius norm) between two covariance estimators. Parameters
comp_covarray-like of shape (n_features, n_features)
The covariance to compare with.
norm{“frobenius”, “spectral”}, default=”frobenius”
The type of norm used to compute the error. Available error types: - ‘frobenius’ (default): sqrt(tr(A^t.A)) - ‘spectral’: sqrt(max(eigenvalues(A^t.A))) where A is the error (comp_cov - self.covariance_).
scalingbool, default=True
If True (default), the squared error norm is divided by n_features. If False, the squared error norm is not rescaled.
squaredbool, default=True
Whether to compute the squared error norm or the error norm. If True (default), the squared error norm is returned. If False, the error norm is returned. Returns
resultfloat
The Mean Squared Error (in the sense of the Frobenius norm) between self and comp_cov covariance estimators.
fit(X, y=None) [source]
Fit the Ledoit-Wolf shrunk covariance model according to the given training data and parameters. Parameters
Xarray-like of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.
yIgnored
Not used, present for API consistency by convention. Returns
selfobject
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
get_precision() [source]
Getter for the precision matrix. Returns
precision_array-like of shape (n_features, n_features)
The precision matrix associated to the current covariance object.
mahalanobis(X) [source]
Computes the squared Mahalanobis distances of given observations. Parameters
Xarray-like of shape (n_samples, n_features)
The observations whose Mahalanobis distances we compute. Observations are assumed to be drawn from the same distribution as the data used in fit. Returns
distndarray of shape (n_samples,)
Squared Mahalanobis distances of the observations.
score(X_test, y=None) [source]
Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix. Parameters
X_testarray-like of shape (n_samples, n_features)
Test data of which we compute the likelihood, where n_samples is the number of samples and n_features is the number of features. X_test is assumed to be drawn from the same distribution as the data used in fit (including centering).
yIgnored
Not used, present for API consistency by convention. Returns
resfloat
The likelihood of the data set with self.covariance_ as an estimator of its covariance matrix.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.covariance.ledoitwolf#sklearn.covariance.LedoitWolf |
sklearn.covariance.LedoitWolf
class sklearn.covariance.LedoitWolf(*, store_precision=True, assume_centered=False, block_size=1000) [source]
LedoitWolf Estimator Ledoit-Wolf is a particular form of shrinkage, where the shrinkage coefficient is computed using O. Ledoit and M. Wolf’s formula as described in “A Well-Conditioned Estimator for Large-Dimensional Covariance Matrices”, Ledoit and Wolf, Journal of Multivariate Analysis, Volume 88, Issue 2, February 2004, pages 365-411. Read more in the User Guide. Parameters
store_precisionbool, default=True
Specify if the estimated precision is stored.
assume_centeredbool, default=False
If True, data will not be centered before computation. Useful when working with data whose mean is almost, but not exactly zero. If False (default), data will be centered before computation.
block_sizeint, default=1000
Size of blocks into which the covariance matrix will be split during its Ledoit-Wolf estimation. This is purely a memory optimization and does not affect results. Attributes
covariance_ndarray of shape (n_features, n_features)
Estimated covariance matrix.
location_ndarray of shape (n_features,)
Estimated location, i.e. the estimated mean.
precision_ndarray of shape (n_features, n_features)
Estimated pseudo inverse matrix. (stored only if store_precision is True)
shrinkage_float
Coefficient in the convex combination used for the computation of the shrunk estimate. Range is [0, 1]. Notes The regularised covariance is: (1 - shrinkage) * cov + shrinkage * mu * np.identity(n_features) where mu = trace(cov) / n_features and shrinkage is given by the Ledoit and Wolf formula (see References) References “A Well-Conditioned Estimator for Large-Dimensional Covariance Matrices”, Ledoit and Wolf, Journal of Multivariate Analysis, Volume 88, Issue 2, February 2004, pages 365-411. Examples >>> import numpy as np
>>> from sklearn.covariance import LedoitWolf
>>> real_cov = np.array([[.4, .2],
... [.2, .8]])
>>> np.random.seed(0)
>>> X = np.random.multivariate_normal(mean=[0, 0],
... cov=real_cov,
... size=50)
>>> cov = LedoitWolf().fit(X)
>>> cov.covariance_
array([[0.4406..., 0.1616...],
[0.1616..., 0.8022...]])
>>> cov.location_
array([ 0.0595... , -0.0075...])
Methods
error_norm(comp_cov[, norm, scaling, squared]) Computes the Mean Squared Error between two covariance estimators.
fit(X[, y]) Fit the Ledoit-Wolf shrunk covariance model according to the given training data and parameters.
get_params([deep]) Get parameters for this estimator.
get_precision() Getter for the precision matrix.
mahalanobis(X) Computes the squared Mahalanobis distances of given observations.
score(X_test[, y]) Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix.
set_params(**params) Set the parameters of this estimator.
error_norm(comp_cov, norm='frobenius', scaling=True, squared=True) [source]
Computes the Mean Squared Error (in the sense of the Frobenius norm) between two covariance estimators. Parameters
comp_covarray-like of shape (n_features, n_features)
The covariance to compare with.
norm{“frobenius”, “spectral”}, default=”frobenius”
The type of norm used to compute the error. Available error types: - ‘frobenius’ (default): sqrt(tr(A^t.A)) - ‘spectral’: sqrt(max(eigenvalues(A^t.A))) where A is the error (comp_cov - self.covariance_).
scalingbool, default=True
If True (default), the squared error norm is divided by n_features. If False, the squared error norm is not rescaled.
squaredbool, default=True
Whether to compute the squared error norm or the error norm. If True (default), the squared error norm is returned. If False, the error norm is returned. Returns
resultfloat
The Mean Squared Error (in the sense of the Frobenius norm) between self and comp_cov covariance estimators.
fit(X, y=None) [source]
Fit the Ledoit-Wolf shrunk covariance model according to the given training data and parameters. Parameters
Xarray-like of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.
yIgnored
Not used, present for API consistency by convention. Returns
selfobject
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
get_precision() [source]
Getter for the precision matrix. Returns
precision_array-like of shape (n_features, n_features)
The precision matrix associated to the current covariance object.
mahalanobis(X) [source]
Computes the squared Mahalanobis distances of given observations. Parameters
Xarray-like of shape (n_samples, n_features)
The observations whose Mahalanobis distances we compute. Observations are assumed to be drawn from the same distribution as the data used in fit. Returns
distndarray of shape (n_samples,)
Squared Mahalanobis distances of the observations.
score(X_test, y=None) [source]
Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix. Parameters
X_testarray-like of shape (n_samples, n_features)
Test data of which we compute the likelihood, where n_samples is the number of samples and n_features is the number of features. X_test is assumed to be drawn from the same distribution as the data used in fit (including centering).
yIgnored
Not used, present for API consistency by convention. Returns
resfloat
The likelihood of the data set with self.covariance_ as an estimator of its covariance matrix.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
Examples using sklearn.covariance.LedoitWolf
Ledoit-Wolf vs OAS estimation
Shrinkage covariance estimation: LedoitWolf vs OAS and max-likelihood
Model selection with Probabilistic PCA and Factor Analysis (FA) | sklearn.modules.generated.sklearn.covariance.ledoitwolf |
error_norm(comp_cov, norm='frobenius', scaling=True, squared=True) [source]
Computes the Mean Squared Error (in the sense of the Frobenius norm) between two covariance estimators. Parameters
comp_covarray-like of shape (n_features, n_features)
The covariance to compare with.
norm{“frobenius”, “spectral”}, default=”frobenius”
The type of norm used to compute the error. Available error types: - ‘frobenius’ (default): sqrt(tr(A^t.A)) - ‘spectral’: sqrt(max(eigenvalues(A^t.A))) where A is the error (comp_cov - self.covariance_).
scalingbool, default=True
If True (default), the squared error norm is divided by n_features. If False, the squared error norm is not rescaled.
squaredbool, default=True
Whether to compute the squared error norm or the error norm. If True (default), the squared error norm is returned. If False, the error norm is returned. Returns
resultfloat
The Mean Squared Error (in the sense of the Frobenius norm) between self and comp_cov covariance estimators. | sklearn.modules.generated.sklearn.covariance.ledoitwolf#sklearn.covariance.LedoitWolf.error_norm |
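As a hedged sketch of error_norm in use (data values illustrative): fit a LedoitWolf estimator on samples drawn from a known population covariance, then measure how far the estimate is from that covariance under the default squared, feature-scaled Frobenius norm and under the raw norm.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.RandomState(0)
real_cov = np.array([[0.4, 0.2],
                     [0.2, 0.8]])
X = rng.multivariate_normal(mean=[0, 0], cov=real_cov, size=50)

lw = LedoitWolf().fit(X)
# Default: squared Frobenius error, divided by n_features.
err = lw.error_norm(real_cov)
# Plain Frobenius norm of the difference, neither scaled nor squared.
err_raw = lw.error_norm(real_cov, scaling=False, squared=False)
```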
fit(X, y=None) [source]
Fit the Ledoit-Wolf shrunk covariance model according to the given training data and parameters. Parameters
Xarray-like of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.
yIgnored
Not used, present for API consistency by convention. Returns
selfobject | sklearn.modules.generated.sklearn.covariance.ledoitwolf#sklearn.covariance.LedoitWolf.fit |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.covariance.ledoitwolf#sklearn.covariance.LedoitWolf.get_params |
get_precision() [source]
Getter for the precision matrix. Returns
precision_array-like of shape (n_features, n_features)
The precision matrix associated to the current covariance object. | sklearn.modules.generated.sklearn.covariance.ledoitwolf#sklearn.covariance.LedoitWolf.get_precision |
mahalanobis(X) [source]
Computes the squared Mahalanobis distances of given observations. Parameters
Xarray-like of shape (n_samples, n_features)
The observations whose Mahalanobis distances we compute. Observations are assumed to be drawn from the same distribution as the data used in fit. Returns
distndarray of shape (n_samples,)
Squared Mahalanobis distances of the observations. | sklearn.modules.generated.sklearn.covariance.ledoitwolf#sklearn.covariance.LedoitWolf.mahalanobis |
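A minimal sketch of mahalanobis (data values illustrative): after fitting, the method returns one squared distance per observation, computed against the fitted location and covariance.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.RandomState(0)
X = rng.multivariate_normal(mean=[0, 0], cov=[[0.4, 0.2], [0.2, 0.8]], size=50)

cov = LedoitWolf().fit(X)
d2 = cov.mahalanobis(X)   # one squared distance per observation, shape (n_samples,)
```

Large values flag observations far from the fitted location relative to the estimated covariance, which is why this method is commonly used for outlier detection.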
score(X_test, y=None) [source]
Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix. Parameters
X_testarray-like of shape (n_samples, n_features)
Test data of which we compute the likelihood, where n_samples is the number of samples and n_features is the number of features. X_test is assumed to be drawn from the same distribution as the data used in fit (including centering).
yIgnored
Not used, present for API consistency by convention. Returns
resfloat
The likelihood of the data set with self.covariance_ as an estimator of its covariance matrix. | sklearn.modules.generated.sklearn.covariance.ledoitwolf#sklearn.covariance.LedoitWolf.score |
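A hedged sketch of score (data values illustrative): fit on one slice of the data and evaluate the Gaussian log-likelihood of held-out samples under the fitted covariance; a higher value means the held-out data are more plausible under the model.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.RandomState(0)
X = rng.multivariate_normal(mean=[0, 0], cov=[[0.4, 0.2], [0.2, 0.8]], size=80)

# Fit on the first 60 samples, score the remaining 20 held-out samples.
cov = LedoitWolf().fit(X[:60])
ll = cov.score(X[60:])   # log-likelihood under the fitted Gaussian model
```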
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.covariance.ledoitwolf#sklearn.covariance.LedoitWolf.set_params |
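The get_params/set_params pair can be sketched as follows: get_params returns the constructor parameters as a dict, and set_params updates them in place and returns the estimator, which is what makes these estimators work with tools like GridSearchCV.

```python
from sklearn.covariance import LedoitWolf

est = LedoitWolf()
params = est.get_params()   # {'assume_centered': False, 'block_size': 1000, 'store_precision': True}

# set_params returns the estimator itself, so calls can be chained.
est.set_params(block_size=500, assume_centered=True)
```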
sklearn.covariance.ledoit_wolf(X, *, assume_centered=False, block_size=1000) [source]
Estimates the shrunk Ledoit-Wolf covariance matrix. Read more in the User Guide. Parameters
Xarray-like of shape (n_samples, n_features)
Data from which to compute the covariance estimate.
assume_centeredbool, default=False
If True, data will not be centered before computation. Useful when working with data whose mean is almost, but not exactly, zero. If False, data will be centered before computation.
block_sizeint, default=1000
Size of blocks into which the covariance matrix will be split. This is purely a memory optimization and does not affect results. Returns
shrunk_covndarray of shape (n_features, n_features)
Shrunk covariance.
shrinkagefloat
Coefficient in the convex combination used for the computation of the shrunk estimate. Notes The regularized (shrunk) covariance is: (1 - shrinkage) * cov + shrinkage * mu * np.identity(n_features) where mu = trace(cov) / n_features | sklearn.modules.generated.sklearn.covariance.ledoit_wolf#sklearn.covariance.ledoit_wolf |
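The Notes formula can be checked directly in a short sketch (data values illustrative): the function form returns both the shrunk covariance and the shrinkage coefficient, and reconstructing (1 - shrinkage) * cov + shrinkage * mu * I from the empirical covariance reproduces the returned matrix.

```python
import numpy as np
from sklearn.covariance import empirical_covariance, ledoit_wolf

rng = np.random.RandomState(0)
X = rng.multivariate_normal(mean=[0, 0], cov=[[0.4, 0.2], [0.2, 0.8]], size=50)

shrunk_cov, shrinkage = ledoit_wolf(X)

# Reconstruct via the Notes formula, with mu = trace(cov) / n_features.
cov = empirical_covariance(X)
mu = np.trace(cov) / cov.shape[0]
reconstructed = (1 - shrinkage) * cov + shrinkage * mu * np.eye(cov.shape[0])
```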
class sklearn.covariance.MinCovDet(*, store_precision=True, assume_centered=False, support_fraction=None, random_state=None) [source]
Minimum Covariance Determinant (MCD): robust estimator of covariance. The Minimum Covariance Determinant covariance estimator is to be applied on Gaussian-distributed data, but could still be relevant on data drawn from a unimodal, symmetric distribution. It is not meant to be used with multi-modal data (the algorithm used to fit a MinCovDet object is likely to fail in such a case). One should consider projection pursuit methods to deal with multi-modal datasets. Read more in the User Guide. Parameters
store_precisionbool, default=True
Specify if the estimated precision is stored.
assume_centeredbool, default=False
If True, the support of the robust location and the covariance estimates is computed, and a covariance estimate is recomputed from it, without centering the data. Useful when working with data whose mean is almost, but not exactly, zero. If False, the robust location and covariance are directly computed with the FastMCD algorithm without additional treatment.
support_fractionfloat, default=None
The proportion of points to be included in the support of the raw MCD estimate. Default is None, which implies that the minimum value of support_fraction will be used within the algorithm: (n_samples + n_features + 1) / 2. The parameter must be in the range (0, 1).
random_stateint, RandomState instance or None, default=None
Determines the pseudo random number generator for shuffling the data. Pass an int for reproducible results across multiple function calls. See Glossary. Attributes
raw_location_ndarray of shape (n_features,)
The raw robust estimated location before correction and re-weighting.
raw_covariance_ndarray of shape (n_features, n_features)
The raw robust estimated covariance before correction and re-weighting.
raw_support_ndarray of shape (n_samples,)
A mask of the observations that have been used to compute the raw robust estimates of location and shape, before correction and re-weighting.
location_ndarray of shape (n_features,)
Estimated robust location.
covariance_ndarray of shape (n_features, n_features)
Estimated robust covariance matrix.
precision_ndarray of shape (n_features, n_features)
Estimated pseudo inverse matrix. (stored only if store_precision is True)
support_ndarray of shape (n_samples,)
A mask of the observations that have been used to compute the robust estimates of location and shape.
dist_ndarray of shape (n_samples,)
Mahalanobis distances of the training set (on which fit is called) observations. References
Rouseeuw1984
P. J. Rousseeuw. Least median of squares regression. J. Am Stat Ass, 79:871, 1984.
Rousseeuw
A Fast Algorithm for the Minimum Covariance Determinant Estimator, 1999, American Statistical Association and the American Society for Quality, TECHNOMETRICS
ButlerDavies
R. W. Butler, P. L. Davies and M. Jhun, Asymptotics For The Minimum Covariance Determinant Estimator, The Annals of Statistics, 1993, Vol. 21, No. 3, 1385-1400 Examples >>> import numpy as np
>>> from sklearn.covariance import MinCovDet
>>> from sklearn.datasets import make_gaussian_quantiles
>>> real_cov = np.array([[.8, .3],
... [.3, .4]])
>>> rng = np.random.RandomState(0)
>>> X = rng.multivariate_normal(mean=[0, 0],
... cov=real_cov,
... size=500)
>>> cov = MinCovDet(random_state=0).fit(X)
>>> cov.covariance_
array([[0.7411..., 0.2535...],
[0.2535..., 0.3053...]])
>>> cov.location_
array([0.0813... , 0.0427...])
Methods
correct_covariance(data) Apply a correction to raw Minimum Covariance Determinant estimates.
error_norm(comp_cov[, norm, scaling, squared]) Computes the Mean Squared Error between two covariance estimators.
fit(X[, y]) Fits a Minimum Covariance Determinant with the FastMCD algorithm.
get_params([deep]) Get parameters for this estimator.
get_precision() Getter for the precision matrix.
mahalanobis(X) Computes the squared Mahalanobis distances of given observations.
reweight_covariance(data) Re-weight raw Minimum Covariance Determinant estimates.
score(X_test[, y]) Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix.
set_params(**params) Set the parameters of this estimator.
correct_covariance(data) [source]
Apply a correction to raw Minimum Covariance Determinant estimates. Correction using the empirical correction factor suggested by Rousseeuw and Van Driessen in [RVD]. Parameters
dataarray-like of shape (n_samples, n_features)
The data matrix, with p features and n samples. The data set must be the one which was used to compute the raw estimates. Returns
covariance_correctedndarray of shape (n_features, n_features)
Corrected robust covariance estimate. References
RVD
A Fast Algorithm for the Minimum Covariance Determinant Estimator, 1999, American Statistical Association and the American Society for Quality, TECHNOMETRICS
error_norm(comp_cov, norm='frobenius', scaling=True, squared=True) [source]
Computes the Mean Squared Error (in the sense of the Frobenius norm) between two covariance estimators. Parameters
comp_covarray-like of shape (n_features, n_features)
The covariance to compare with.
norm{“frobenius”, “spectral”}, default=”frobenius”
The type of norm used to compute the error. Available error types: - ‘frobenius’ (default): sqrt(tr(A^t.A)) - ‘spectral’: sqrt(max(eigenvalues(A^t.A))) where A is the error (comp_cov - self.covariance_).
scalingbool, default=True
If True (default), the squared error norm is divided by n_features. If False, the squared error norm is not rescaled.
squaredbool, default=True
Whether to compute the squared error norm or the error norm. If True (default), the squared error norm is returned. If False, the error norm is returned. Returns
resultfloat
The Mean Squared Error (in the sense of the Frobenius norm) between self and comp_cov covariance estimators.
fit(X, y=None) [source]
Fits a Minimum Covariance Determinant with the FastMCD algorithm. Parameters
Xarray-like of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.
yIgnored
Not used, present for API consistency by convention. Returns
selfobject
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
get_precision() [source]
Getter for the precision matrix. Returns
precision_array-like of shape (n_features, n_features)
The precision matrix associated to the current covariance object.
mahalanobis(X) [source]
Computes the squared Mahalanobis distances of given observations. Parameters
Xarray-like of shape (n_samples, n_features)
The observations whose Mahalanobis distances we compute. Observations are assumed to be drawn from the same distribution as the data used in fit. Returns
distndarray of shape (n_samples,)
Squared Mahalanobis distances of the observations.
reweight_covariance(data) [source]
Re-weight raw Minimum Covariance Determinant estimates. Re-weight observations using Rousseeuw’s method (equivalent to deleting outlying observations from the data set before computing location and covariance estimates) described in [RVDriessen]. Parameters
dataarray-like of shape (n_samples, n_features)
The data matrix, with p features and n samples. The data set must be the one which was used to compute the raw estimates. Returns
location_reweightedndarray of shape (n_features,)
Re-weighted robust location estimate.
covariance_reweightedndarray of shape (n_features, n_features)
Re-weighted robust covariance estimate.
support_reweightedndarray of shape (n_samples,), dtype=bool
A mask of the observations that have been used to compute the re-weighted robust location and covariance estimates. References
RVDriessen
A Fast Algorithm for the Minimum Covariance Determinant Estimator, 1999, American Statistical Association and the American Society for Quality, TECHNOMETRICS
score(X_test, y=None) [source]
Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix. Parameters
X_testarray-like of shape (n_samples, n_features)
Test data whose likelihood we compute, where n_samples is the number of samples and n_features is the number of features. X_test is assumed to be drawn from the same distribution as the data used in fit (including centering).
yIgnored
Not used, present for API consistency by convention. Returns
resfloat
The likelihood of the data set with self.covariance_ as an estimator of its covariance matrix.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.covariance.mincovdet#sklearn.covariance.MinCovDet |
sklearn.covariance.MinCovDet
class sklearn.covariance.MinCovDet(*, store_precision=True, assume_centered=False, support_fraction=None, random_state=None) [source]
Minimum Covariance Determinant (MCD): robust estimator of covariance. The Minimum Covariance Determinant covariance estimator is meant to be applied to Gaussian-distributed data, but could still be relevant on data drawn from a unimodal, symmetric distribution. It is not meant to be used with multi-modal data (the algorithm used to fit a MinCovDet object is likely to fail in such a case). One should consider projection pursuit methods to deal with multi-modal datasets. Read more in the User Guide. Parameters
store_precisionbool, default=True
Specify if the estimated precision is stored.
assume_centeredbool, default=False
If True, the support of the robust location and the covariance estimates is computed, and a covariance estimate is recomputed from it, without centering the data. Useful when working with data whose mean is approximately, but not exactly, zero. If False, the robust location and covariance are directly computed with the FastMCD algorithm without additional treatment.
support_fractionfloat, default=None
The proportion of points to be included in the support of the raw MCD estimate. Default is None, which implies that the minimum value of support_fraction will be used within the algorithm: (n_samples + n_features + 1) / 2. The parameter must be in the range (0, 1).
random_stateint, RandomState instance or None, default=None
Determines the pseudo random number generator for shuffling the data. Pass an int for reproducible results across multiple function calls. See the Glossary. Attributes
raw_location_ndarray of shape (n_features,)
The raw robust estimated location before correction and re-weighting.
raw_covariance_ndarray of shape (n_features, n_features)
The raw robust estimated covariance before correction and re-weighting.
raw_support_ndarray of shape (n_samples,)
A mask of the observations that have been used to compute the raw robust estimates of location and shape, before correction and re-weighting.
location_ndarray of shape (n_features,)
Estimated robust location.
covariance_ndarray of shape (n_features, n_features)
Estimated robust covariance matrix.
precision_ndarray of shape (n_features, n_features)
Estimated pseudo inverse matrix. (stored only if store_precision is True)
support_ndarray of shape (n_samples,)
A mask of the observations that have been used to compute the robust estimates of location and shape.
dist_ndarray of shape (n_samples,)
Mahalanobis distances of the training set (on which fit is called) observations. References
Rousseeuw1984
P. J. Rousseeuw. Least median of squares regression. J. Am Stat Ass, 79:871, 1984.
Rousseeuw
A Fast Algorithm for the Minimum Covariance Determinant Estimator, 1999, American Statistical Association and the American Society for Quality, TECHNOMETRICS
ButlerDavies
R. W. Butler, P. L. Davies and M. Jhun, Asymptotics For The Minimum Covariance Determinant Estimator, The Annals of Statistics, 1993, Vol. 21, No. 3, 1385-1400 Examples >>> import numpy as np
>>> from sklearn.covariance import MinCovDet
>>> from sklearn.datasets import make_gaussian_quantiles
>>> real_cov = np.array([[.8, .3],
... [.3, .4]])
>>> rng = np.random.RandomState(0)
>>> X = rng.multivariate_normal(mean=[0, 0],
... cov=real_cov,
... size=500)
>>> cov = MinCovDet(random_state=0).fit(X)
>>> cov.covariance_
array([[0.7411..., 0.2535...],
[0.2535..., 0.3053...]])
>>> cov.location_
array([0.0813... , 0.0427...])
Methods
correct_covariance(data) Apply a correction to raw Minimum Covariance Determinant estimates.
error_norm(comp_cov[, norm, scaling, squared]) Computes the Mean Squared Error between two covariance estimators.
fit(X[, y]) Fits a Minimum Covariance Determinant with the FastMCD algorithm.
get_params([deep]) Get parameters for this estimator.
get_precision() Getter for the precision matrix.
mahalanobis(X) Computes the squared Mahalanobis distances of given observations.
reweight_covariance(data) Re-weight raw Minimum Covariance Determinant estimates.
score(X_test[, y]) Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix.
set_params(**params) Set the parameters of this estimator.
correct_covariance(data) [source]
Apply a correction to raw Minimum Covariance Determinant estimates. Correction using the empirical correction factor suggested by Rousseeuw and Van Driessen in [RVD]. Parameters
dataarray-like of shape (n_samples, n_features)
The data matrix, with p features and n samples. The data set must be the one which was used to compute the raw estimates. Returns
covariance_correctedndarray of shape (n_features, n_features)
Corrected robust covariance estimate. References
RVD
A Fast Algorithm for the Minimum Covariance Determinant Estimator, 1999, American Statistical Association and the American Society for Quality, TECHNOMETRICS
error_norm(comp_cov, norm='frobenius', scaling=True, squared=True) [source]
Computes the Mean Squared Error between two covariance estimators (in the sense of the Frobenius norm). Parameters
comp_covarray-like of shape (n_features, n_features)
The covariance to compare with.
norm{“frobenius”, “spectral”}, default=”frobenius”
The type of norm used to compute the error. Available error types: - ‘frobenius’ (default): sqrt(tr(A^t.A)) - ‘spectral’: sqrt(max(eigenvalues(A^t.A))) where A is the error (comp_cov - self.covariance_).
scalingbool, default=True
If True (default), the squared error norm is divided by n_features. If False, the squared error norm is not rescaled.
squaredbool, default=True
Whether to compute the squared error norm or the error norm. If True (default), the squared error norm is returned. If False, the error norm is returned. Returns
resultfloat
The Mean Squared Error (in the sense of the Frobenius norm) between self and comp_cov covariance estimators.
fit(X, y=None) [source]
Fits a Minimum Covariance Determinant with the FastMCD algorithm. Parameters
Xarray-like of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.
yIgnored
Not used, present for API consistency by convention. Returns
selfobject
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
get_precision() [source]
Getter for the precision matrix. Returns
precision_array-like of shape (n_features, n_features)
The precision matrix associated with the current covariance object.
mahalanobis(X) [source]
Computes the squared Mahalanobis distances of given observations. Parameters
Xarray-like of shape (n_samples, n_features)
The observations whose squared Mahalanobis distances are computed. Observations are assumed to be drawn from the same distribution as the data used in fit. Returns
distndarray of shape (n_samples,)
Squared Mahalanobis distances of the observations.
reweight_covariance(data) [source]
Re-weight raw Minimum Covariance Determinant estimates. Re-weight observations using Rousseeuw’s method (equivalent to deleting outlying observations from the data set before computing location and covariance estimates) described in [RVDriessen]. Parameters
dataarray-like of shape (n_samples, n_features)
The data matrix, with p features and n samples. The data set must be the one which was used to compute the raw estimates. Returns
location_reweightedndarray of shape (n_features,)
Re-weighted robust location estimate.
covariance_reweightedndarray of shape (n_features, n_features)
Re-weighted robust covariance estimate.
support_reweightedndarray of shape (n_samples,), dtype=bool
A mask of the observations that have been used to compute the re-weighted robust location and covariance estimates. References
RVDriessen
A Fast Algorithm for the Minimum Covariance Determinant Estimator, 1999, American Statistical Association and the American Society for Quality, TECHNOMETRICS
score(X_test, y=None) [source]
Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix. Parameters
X_testarray-like of shape (n_samples, n_features)
Test data whose likelihood we compute, where n_samples is the number of samples and n_features is the number of features. X_test is assumed to be drawn from the same distribution as the data used in fit (including centering).
yIgnored
Not used, present for API consistency by convention. Returns
resfloat
The likelihood of the data set with self.covariance_ as an estimator of its covariance matrix.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
Examples using sklearn.covariance.MinCovDet
Robust covariance estimation and Mahalanobis distances relevance
Robust vs Empirical covariance estimate | sklearn.modules.generated.sklearn.covariance.mincovdet |
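The `mahalanobis` method and the `dist_` attribute above are typically used for robust outlier flagging. A minimal sketch, not part of the reference text: the contamination offset and the 97.5% chi-squared cutoff are illustrative conventions chosen here, not values prescribed by this documentation.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

rng = np.random.RandomState(0)
X = rng.multivariate_normal(mean=[0, 0], cov=[[0.8, 0.3], [0.3, 0.4]], size=200)
X[:5] += 6  # contaminate the first five samples with an obvious shift

mcd = MinCovDet(random_state=0).fit(X)
d2 = mcd.mahalanobis(X)  # squared robust Mahalanobis distances
cutoff = chi2.ppf(0.975, df=X.shape[1])  # conventional 97.5% chi-squared threshold
outliers = np.flatnonzero(d2 > cutoff)
print(len(outliers))
```

Because the MCD support excludes the shifted points, their robust distances are far above the cutoff even though they would distort a non-robust empirical covariance.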
correct_covariance(data) [source]
Apply a correction to raw Minimum Covariance Determinant estimates. Correction using the empirical correction factor suggested by Rousseeuw and Van Driessen in [RVD]. Parameters
dataarray-like of shape (n_samples, n_features)
The data matrix, with p features and n samples. The data set must be the one which was used to compute the raw estimates. Returns
covariance_correctedndarray of shape (n_features, n_features)
Corrected robust covariance estimate. References
RVD
A Fast Algorithm for the Minimum Covariance Determinant Estimator, 1999, American Statistical Association and the American Society for Quality, TECHNOMETRICS | sklearn.modules.generated.sklearn.covariance.mincovdet#sklearn.covariance.MinCovDet.correct_covariance |
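A minimal sketch of calling correct_covariance after fit. As the docstring requires, the data passed in must be the same matrix used to compute the raw estimates; the Gaussian sample here is an illustrative choice.

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.RandomState(0)
X = rng.multivariate_normal(mean=[0, 0], cov=[[0.8, 0.3], [0.3, 0.4]], size=300)

mcd = MinCovDet(random_state=0).fit(X)
# Rescales raw_covariance_ by the empirical consistency correction factor;
# pass the SAME data that was given to fit.
corrected = mcd.correct_covariance(X)
print(corrected.shape)
```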
error_norm(comp_cov, norm='frobenius', scaling=True, squared=True) [source]
Computes the Mean Squared Error between two covariance estimators (in the sense of the Frobenius norm). Parameters
comp_covarray-like of shape (n_features, n_features)
The covariance to compare with.
norm{“frobenius”, “spectral”}, default=”frobenius”
The type of norm used to compute the error. Available error types: - ‘frobenius’ (default): sqrt(tr(A^t.A)) - ‘spectral’: sqrt(max(eigenvalues(A^t.A))) where A is the error (comp_cov - self.covariance_).
scalingbool, default=True
If True (default), the squared error norm is divided by n_features. If False, the squared error norm is not rescaled.
squaredbool, default=True
Whether to compute the squared error norm or the error norm. If True (default), the squared error norm is returned. If False, the error norm is returned. Returns
resultfloat
The Mean Squared Error (in the sense of the Frobenius norm) between self and comp_cov covariance estimators. | sklearn.modules.generated.sklearn.covariance.mincovdet#sklearn.covariance.MinCovDet.error_norm |
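A short sketch of error_norm, comparing the fitted robust covariance against the known generating covariance (the ground-truth matrix is an illustrative choice, mirroring the class doctest above).

```python
import numpy as np
from sklearn.covariance import MinCovDet

real_cov = np.array([[0.8, 0.3], [0.3, 0.4]])
rng = np.random.RandomState(0)
X = rng.multivariate_normal(mean=[0, 0], cov=real_cov, size=500)

mcd = MinCovDet(random_state=0).fit(X)
# Defaults: squared Frobenius error, divided by n_features.
err = mcd.error_norm(real_cov)
# Unsquared spectral norm of the error matrix (comp_cov - covariance_).
err_spectral = mcd.error_norm(real_cov, norm='spectral', squared=False)
print(err, err_spectral)
```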
fit(X, y=None) [source]
Fits a Minimum Covariance Determinant with the FastMCD algorithm. Parameters
Xarray-like of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.
yIgnored
Not used, present for API consistency by convention. Returns
selfobject | sklearn.modules.generated.sklearn.covariance.mincovdet#sklearn.covariance.MinCovDet.fit |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.covariance.mincovdet#sklearn.covariance.MinCovDet.get_params |
get_precision() [source]
Getter for the precision matrix. Returns
precision_array-like of shape (n_features, n_features)
The precision matrix associated with the current covariance object.
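A quick sanity check for get_precision: the returned matrix is the (pseudo-)inverse of covariance_, so their product is close to the identity. The data below is an illustrative choice.

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.RandomState(0)
X = rng.multivariate_normal(mean=[0, 0], cov=[[0.8, 0.3], [0.3, 0.4]], size=300)

mcd = MinCovDet(random_state=0).fit(X)
P = mcd.get_precision()
# Precision times covariance should recover the identity up to numerical error.
print(np.allclose(P @ mcd.covariance_, np.eye(2), atol=1e-6))
```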
mahalanobis(X) [source]
Computes the squared Mahalanobis distances of given observations. Parameters
Xarray-like of shape (n_samples, n_features)
The observations, the Mahalanobis distances of the which we compute. Observations are assumed to be drawn from the same distribution than the data used in fit. Returns
distndarray of shape (n_samples,)
Squared Mahalanobis distances of the observations. | sklearn.modules.generated.sklearn.covariance.mincovdet#sklearn.covariance.MinCovDet.mahalanobis |
reweight_covariance(data) [source]
Re-weight raw Minimum Covariance Determinant estimates. Re-weight observations using Rousseeuw’s method (equivalent to deleting outlying observations from the data set before computing location and covariance estimates) described in [RVDriessen]. Parameters
dataarray-like of shape (n_samples, n_features)
The data matrix, with p features and n samples. The data set must be the one which was used to compute the raw estimates. Returns
location_reweightedndarray of shape (n_features,)
Re-weighted robust location estimate.
covariance_reweightedndarray of shape (n_features, n_features)
Re-weighted robust covariance estimate.
support_reweightedndarray of shape (n_samples,), dtype=bool
A mask of the observations that have been used to compute the re-weighted robust location and covariance estimates. References
RVDriessen
A Fast Algorithm for the Minimum Covariance Determinant Estimator, 1999, American Statistical Association and the American Society for Quality, TECHNOMETRICS | sklearn.modules.generated.sklearn.covariance.mincovdet#sklearn.covariance.MinCovDet.reweight_covariance |
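A sketch of reweight_covariance and its three return values. As the docstring notes, the data passed in must be the same matrix used in fit; the sample below is illustrative.

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.RandomState(0)
X = rng.multivariate_normal(mean=[0, 0], cov=[[0.8, 0.3], [0.3, 0.4]], size=300)

mcd = MinCovDet(random_state=0).fit(X)
# Returns re-weighted location, re-weighted covariance, and a boolean support mask.
loc, cov, support = mcd.reweight_covariance(X)
print(loc.shape, cov.shape, support.sum())
```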
score(X_test, y=None) [source]
Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix. Parameters
X_testarray-like of shape (n_samples, n_features)
Test data of which we compute the likelihood, where n_samples is the number of samples and n_features is the number of features. X_test is assumed to be drawn from the same distribution than the data used in fit (including centering).
yIgnored
Not used, present for API consistency by convention. Returns
resfloat
The likelihood of the data set with self.covariance_ as an estimator of its covariance matrix. | sklearn.modules.generated.sklearn.covariance.mincovdet#sklearn.covariance.MinCovDet.score |
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.covariance.mincovdet#sklearn.covariance.MinCovDet.set_params |
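A sketch of set_params on a simple estimator and on a nested object, using the <component>__<parameter> form described above. The Pipeline, its step names, and the parameter values are illustrative choices.

```python
from sklearn.covariance import MinCovDet
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Simple estimator: parameters are set directly by name; set_params returns self.
mcd = MinCovDet().set_params(support_fraction=0.8, random_state=0)
print(mcd.support_fraction)

# Nested object: "mcd__support_fraction" reaches inside the Pipeline step named "mcd".
pipe = Pipeline([("scale", StandardScaler()), ("mcd", MinCovDet())])
pipe.set_params(mcd__support_fraction=0.9)
print(pipe.named_steps["mcd"].support_fraction)
```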
class sklearn.covariance.OAS(*, store_precision=True, assume_centered=False) [source]
Oracle Approximating Shrinkage Estimator. Read more in the User Guide. OAS is a particular form of shrinkage described in “Shrinkage Algorithms for MMSE Covariance Estimation” Chen et al., IEEE Trans. on Sign. Proc., Volume 58, Issue 10, October 2010. The formula used here does not correspond to the one given in the article. In the original article, formula (23) states that 2/p is multiplied by Trace(cov*cov) in both the numerator and denominator, but this operation is omitted because for a large p, the value of 2/p is so small that it doesn’t affect the value of the estimator.
store_precisionbool, default=True
Specify if the estimated precision is stored.
assume_centeredbool, default=False
If True, data will not be centered before computation. Useful when working with data whose mean is almost, but not exactly zero. If False (default), data will be centered before computation. Attributes
covariance_ndarray of shape (n_features, n_features)
Estimated covariance matrix.
location_ndarray of shape (n_features,)
Estimated location, i.e. the estimated mean.
precision_ndarray of shape (n_features, n_features)
Estimated pseudo inverse matrix. (stored only if store_precision is True)
shrinkage_float
Coefficient in the convex combination used for the computation of the shrunk estimate. Range is [0, 1]. Notes The regularised covariance is: (1 - shrinkage) * cov + shrinkage * mu * np.identity(n_features) where mu = trace(cov) / n_features and shrinkage is given by the OAS formula (see References). References “Shrinkage Algorithms for MMSE Covariance Estimation” Chen et al., IEEE Trans. on Sign. Proc., Volume 58, Issue 10, October 2010. Examples >>> import numpy as np
>>> from sklearn.covariance import OAS
>>> from sklearn.datasets import make_gaussian_quantiles
>>> real_cov = np.array([[.8, .3],
... [.3, .4]])
>>> rng = np.random.RandomState(0)
>>> X = rng.multivariate_normal(mean=[0, 0],
... cov=real_cov,
... size=500)
>>> oas = OAS().fit(X)
>>> oas.covariance_
array([[0.7533..., 0.2763...],
[0.2763..., 0.3964...]])
>>> oas.precision_
array([[ 1.7833..., -1.2431... ],
[-1.2431..., 3.3889...]])
>>> oas.shrinkage_
0.0195...
Methods
error_norm(comp_cov[, norm, scaling, squared]) Computes the Mean Squared Error between two covariance estimators.
fit(X[, y]) Fit the Oracle Approximating Shrinkage covariance model according to the given training data and parameters.
get_params([deep]) Get parameters for this estimator.
get_precision() Getter for the precision matrix.
mahalanobis(X) Computes the squared Mahalanobis distances of given observations.
score(X_test[, y]) Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix.
set_params(**params) Set the parameters of this estimator.
error_norm(comp_cov, norm='frobenius', scaling=True, squared=True) [source]
Computes the Mean Squared Error between two covariance estimators (in the sense of the Frobenius norm). Parameters
comp_covarray-like of shape (n_features, n_features)
The covariance to compare with.
norm{“frobenius”, “spectral”}, default=”frobenius”
The type of norm used to compute the error. Available error types: - ‘frobenius’ (default): sqrt(tr(A^t.A)) - ‘spectral’: sqrt(max(eigenvalues(A^t.A))) where A is the error (comp_cov - self.covariance_).
scalingbool, default=True
If True (default), the squared error norm is divided by n_features. If False, the squared error norm is not rescaled.
squaredbool, default=True
Whether to compute the squared error norm or the error norm. If True (default), the squared error norm is returned. If False, the error norm is returned. Returns
resultfloat
The Mean Squared Error (in the sense of the Frobenius norm) between self and comp_cov covariance estimators.
fit(X, y=None) [source]
Fit the Oracle Approximating Shrinkage covariance model according to the given training data and parameters. Parameters
Xarray-like of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.
yIgnored
Not used, present for API consistency by convention. Returns
selfobject
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
get_precision() [source]
Getter for the precision matrix. Returns
precision_array-like of shape (n_features, n_features)
The precision matrix associated with the current covariance object.
mahalanobis(X) [source]
Computes the squared Mahalanobis distances of given observations. Parameters
Xarray-like of shape (n_samples, n_features)
The observations whose squared Mahalanobis distances are computed. Observations are assumed to be drawn from the same distribution as the data used in fit. Returns
distndarray of shape (n_samples,)
Squared Mahalanobis distances of the observations.
score(X_test, y=None) [source]
Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix. Parameters
X_testarray-like of shape (n_samples, n_features)
Test data whose likelihood we compute, where n_samples is the number of samples and n_features is the number of features. X_test is assumed to be drawn from the same distribution as the data used in fit (including centering).
yIgnored
Not used, present for API consistency by convention. Returns
resfloat
The likelihood of the data set with self.covariance_ as an estimator of its covariance matrix.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance. | sklearn.modules.generated.sklearn.covariance.oas#sklearn.covariance.OAS |
sklearn.covariance.oas(X, *, assume_centered=False) [source]
Estimate covariance with the Oracle Approximating Shrinkage algorithm. Parameters
Xarray-like of shape (n_samples, n_features)
Data from which to compute the covariance estimate.
assume_centeredbool, default=False
If True, data will not be centered before computation. Useful when working with data whose mean is approximately, but not exactly, zero. If False, data will be centered before computation. Returns
shrunk_covarray-like of shape (n_features, n_features)
Shrunk covariance.
shrinkagefloat
Coefficient in the convex combination used for the computation of the shrunk estimate. Notes The regularised (shrunk) covariance is: (1 - shrinkage) * cov + shrinkage * mu * np.identity(n_features) where mu = trace(cov) / n_features The formula we used to implement the OAS is slightly modified compared to the one given in the article. See OAS for more details. | sklearn.modules.generated.oas-function#sklearn.covariance.oas |
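The Notes above give the shrunk-covariance formula. A sketch that reconstructs the output of the oas function from that formula, using empirical_covariance for the sample covariance (the data is an illustrative Gaussian draw):

```python
import numpy as np
from sklearn.covariance import oas, empirical_covariance

rng = np.random.RandomState(0)
X = rng.multivariate_normal(mean=[0, 0], cov=[[0.8, 0.3], [0.3, 0.4]], size=500)

shrunk_cov, shrinkage = oas(X)

# Rebuild the estimate from the documented formula:
# (1 - shrinkage) * cov + shrinkage * mu * identity, with mu = trace(cov) / n_features.
emp = empirical_covariance(X)
mu = np.trace(emp) / X.shape[1]
manual = (1 - shrinkage) * emp + shrinkage * mu * np.eye(X.shape[1])
print(np.allclose(shrunk_cov, manual))
```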
sklearn.covariance.OAS
class sklearn.covariance.OAS(*, store_precision=True, assume_centered=False) [source]
Oracle Approximating Shrinkage Estimator. Read more in the User Guide. OAS is a particular form of shrinkage described in “Shrinkage Algorithms for MMSE Covariance Estimation” Chen et al., IEEE Trans. on Sign. Proc., Volume 58, Issue 10, October 2010. The formula used here does not correspond to the one given in the article. In the original article, formula (23) states that 2/p is multiplied by Trace(cov*cov) in both the numerator and denominator, but this operation is omitted because for a large p, the value of 2/p is so small that it doesn’t affect the value of the estimator.
store_precisionbool, default=True
Specify if the estimated precision is stored.
assume_centeredbool, default=False
If True, data will not be centered before computation. Useful when working with data whose mean is almost, but not exactly zero. If False (default), data will be centered before computation. Attributes
covariance_ndarray of shape (n_features, n_features)
Estimated covariance matrix.
location_ndarray of shape (n_features,)
Estimated location, i.e. the estimated mean.
precision_ndarray of shape (n_features, n_features)
Estimated pseudo inverse matrix. (stored only if store_precision is True)
shrinkage_float
Coefficient in the convex combination used for the computation of the shrunk estimate. Range is [0, 1]. Notes The regularised covariance is: (1 - shrinkage) * cov + shrinkage * mu * np.identity(n_features) where mu = trace(cov) / n_features and shrinkage is given by the OAS formula (see References). References “Shrinkage Algorithms for MMSE Covariance Estimation” Chen et al., IEEE Trans. on Sign. Proc., Volume 58, Issue 10, October 2010. Examples >>> import numpy as np
>>> from sklearn.covariance import OAS
>>> from sklearn.datasets import make_gaussian_quantiles
>>> real_cov = np.array([[.8, .3],
... [.3, .4]])
>>> rng = np.random.RandomState(0)
>>> X = rng.multivariate_normal(mean=[0, 0],
... cov=real_cov,
... size=500)
>>> oas = OAS().fit(X)
>>> oas.covariance_
array([[0.7533..., 0.2763...],
[0.2763..., 0.3964...]])
>>> oas.precision_
array([[ 1.7833..., -1.2431... ],
[-1.2431..., 3.3889...]])
>>> oas.shrinkage_
0.0195...
Methods
error_norm(comp_cov[, norm, scaling, squared]) Computes the Mean Squared Error between two covariance estimators.
fit(X[, y]) Fit the Oracle Approximating Shrinkage covariance model according to the given training data and parameters.
get_params([deep]) Get parameters for this estimator.
get_precision() Getter for the precision matrix.
mahalanobis(X) Computes the squared Mahalanobis distances of given observations.
score(X_test[, y]) Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix.
set_params(**params) Set the parameters of this estimator.
error_norm(comp_cov, norm='frobenius', scaling=True, squared=True) [source]
Computes the Mean Squared Error between two covariance estimators (in the sense of the Frobenius norm). Parameters
comp_covarray-like of shape (n_features, n_features)
The covariance to compare with.
norm{“frobenius”, “spectral”}, default=”frobenius”
The type of norm used to compute the error. Available error types: - ‘frobenius’ (default): sqrt(tr(A^t.A)) - ‘spectral’: sqrt(max(eigenvalues(A^t.A))) where A is the error (comp_cov - self.covariance_).
scalingbool, default=True
If True (default), the squared error norm is divided by n_features. If False, the squared error norm is not rescaled.
squaredbool, default=True
Whether to compute the squared error norm or the error norm. If True (default), the squared error norm is returned. If False, the error norm is returned. Returns
resultfloat
The Mean Squared Error (in the sense of the Frobenius norm) between self and comp_cov covariance estimators.
fit(X, y=None) [source]
Fit the Oracle Approximating Shrinkage covariance model according to the given training data and parameters. Parameters
Xarray-like of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.
yIgnored
Not used, present for API consistency by convention. Returns
selfobject
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values.
get_precision() [source]
Getter for the precision matrix. Returns
precision_array-like of shape (n_features, n_features)
The precision matrix associated with the current covariance object.
mahalanobis(X) [source]
Computes the squared Mahalanobis distances of given observations. Parameters
Xarray-like of shape (n_samples, n_features)
The observations whose squared Mahalanobis distances are computed. Observations are assumed to be drawn from the same distribution as the data used in fit. Returns
distndarray of shape (n_samples,)
Squared Mahalanobis distances of the observations.
score(X_test, y=None) [source]
Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix. Parameters
X_testarray-like of shape (n_samples, n_features)
Test data whose likelihood we compute, where n_samples is the number of samples and n_features is the number of features. X_test is assumed to be drawn from the same distribution as the data used in fit (including centering).
yIgnored
Not used, present for API consistency by convention. Returns
resfloat
The likelihood of the data set with self.covariance_ as an estimator of its covariance matrix.
set_params(**params) [source]
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters
**paramsdict
Estimator parameters. Returns
selfestimator instance
Estimator instance.
Examples using sklearn.covariance.OAS
Normal, Ledoit-Wolf and OAS Linear Discriminant Analysis for classification
Ledoit-Wolf vs OAS estimation
Shrinkage covariance estimation: LedoitWolf vs OAS and max-likelihood | sklearn.modules.generated.sklearn.covariance.oas |
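Mirroring the “Ledoit-Wolf vs OAS estimation” example listed above, a minimal sketch comparing the two shrinkage coefficients on the same data (the two-feature Gaussian sample is an illustrative choice):

```python
import numpy as np
from sklearn.covariance import OAS, LedoitWolf

real_cov = np.array([[0.8, 0.3], [0.3, 0.4]])
rng = np.random.RandomState(0)
X = rng.multivariate_normal(mean=[0, 0], cov=real_cov, size=500)

# Both estimators shrink the empirical covariance toward a scaled identity;
# only the data-driven shrinkage coefficient differs.
oas_est = OAS().fit(X)
lw_est = LedoitWolf().fit(X)
print(oas_est.shrinkage_, lw_est.shrinkage_)
print(oas_est.error_norm(real_cov), lw_est.error_norm(real_cov))
```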
error_norm(comp_cov, norm='frobenius', scaling=True, squared=True) [source]
Computes the Mean Squared Error between two covariance estimators (in the sense of the Frobenius norm). Parameters
comp_covarray-like of shape (n_features, n_features)
The covariance to compare with.
norm{“frobenius”, “spectral”}, default=”frobenius”
The type of norm used to compute the error. Available error types: - ‘frobenius’ (default): sqrt(tr(A^t.A)) - ‘spectral’: sqrt(max(eigenvalues(A^t.A))) where A is the error (comp_cov - self.covariance_).
scalingbool, default=True
If True (default), the squared error norm is divided by n_features. If False, the squared error norm is not rescaled.
squaredbool, default=True
Whether to compute the squared error norm or the error norm. If True (default), the squared error norm is returned. If False, the error norm is returned. Returns
resultfloat
The Mean Squared Error (in the sense of the Frobenius norm) between self and comp_cov covariance estimators. | sklearn.modules.generated.sklearn.covariance.oas#sklearn.covariance.OAS.error_norm |
fit(X, y=None) [source]
Fit the Oracle Approximating Shrinkage covariance model according to the given training data and parameters. Parameters
Xarray-like of shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.
yIgnored
Not used, present for API consistency by convention. Returns
selfobject | sklearn.modules.generated.sklearn.covariance.oas#sklearn.covariance.OAS.fit |
get_params(deep=True) [source]
Get parameters for this estimator. Parameters
deepbool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns
paramsdict
Parameter names mapped to their values. | sklearn.modules.generated.sklearn.covariance.oas#sklearn.covariance.OAS.get_params |
get_precision() [source]
Getter for the precision matrix. Returns
precision_array-like of shape (n_features, n_features)
The precision matrix associated with the current covariance object.
mahalanobis(X) [source]
Computes the squared Mahalanobis distances of given observations. Parameters
Xarray-like of shape (n_samples, n_features)
The observations whose squared Mahalanobis distances are computed. Observations are assumed to be drawn from the same distribution as the data used in fit. Returns
distndarray of shape (n_samples,)
Squared Mahalanobis distances of the observations. | sklearn.modules.generated.sklearn.covariance.oas#sklearn.covariance.OAS.mahalanobis |
score(X_test, y=None) [source]
Computes the log-likelihood of a Gaussian data set with self.covariance_ as an estimator of its covariance matrix. Parameters
X_testarray-like of shape (n_samples, n_features)
Test data whose likelihood we compute, where n_samples is the number of samples and n_features is the number of features. X_test is assumed to be drawn from the same distribution as the data used in fit (including centering).
yIgnored
Not used, present for API consistency by convention. Returns
resfloat
The likelihood of the data set with self.covariance_ as an estimator of its covariance matrix. | sklearn.modules.generated.sklearn.covariance.oas#sklearn.covariance.OAS.score |