sklearn.cluster.cluster_optics_dbscan
sklearn.cluster.cluster_optics_dbscan(*, reachability, core_distances, ordering, eps) [source]
Performs DBSCAN extraction for an arbitrary epsilon. Extracting the clusters runs in linear time. Note that this results in labels_ which are close to a DBSCAN with similar settings and eps, only if eps is close to max_eps.
Parameters
reachability : array of shape (n_samples,)
Reachability distances calculated by OPTICS (reachability_).
core_distances : array of shape (n_samples,)
Distances at which points become core (core_distances_).
ordering : array of shape (n_samples,)
OPTICS ordered point indices (ordering_).
eps : float
DBSCAN eps parameter. Must be set to < max_eps. Results will be close to the DBSCAN algorithm if eps and max_eps are close to one another.
Returns
labels_ : array of shape (n_samples,)
The estimated labels.
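A minimal sketch of the usual workflow: fit OPTICS once, then reuse its fitted attributes to extract DBSCAN-style labels at a chosen eps. The toy data and eps value are illustrative.

```python
import numpy as np
from sklearn.cluster import OPTICS, cluster_optics_dbscan

# Two loose groups of points; illustrative toy data.
X = np.array([[1, 2], [2, 5], [3, 6],
              [8, 7], [8, 8], [7, 3]], dtype=float)

# Fit OPTICS once; its attributes can then be reused to extract
# DBSCAN-like clusterings at any eps < max_eps without re-fitting.
clust = OPTICS(min_samples=2, max_eps=np.inf).fit(X)
labels = cluster_optics_dbscan(
    reachability=clust.reachability_,
    core_distances=clust.core_distances_,
    ordering=clust.ordering_,
    eps=4.5,
)
```

Because the reachability graph is computed only once, labels for several eps values can be extracted cheaply from the same fitted model.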
Examples using sklearn.cluster.cluster_optics_dbscan
Demo of OPTICS clustering algorithm
sklearn.cluster.cluster_optics_xi
sklearn.cluster.cluster_optics_xi(*, reachability, predecessor, ordering, min_samples, min_cluster_size=None, xi=0.05, predecessor_correction=True) [source]
Automatically extract clusters according to the Xi-steep method.
Parameters
reachability : ndarray of shape (n_samples,)
Reachability distances calculated by OPTICS (reachability_).
predecessor : ndarray of shape (n_samples,)
Predecessors calculated by OPTICS.
ordering : ndarray of shape (n_samples,)
OPTICS ordered point indices (ordering_).
min_samples : int > 1 or float between 0 and 1
The same as the min_samples given to OPTICS. Up and down steep regions can't have more than min_samples consecutive non-steep points. Expressed as an absolute number or a fraction of the number of samples (rounded to be at least 2).
min_cluster_size : int > 1 or float between 0 and 1, default=None
Minimum number of samples in an OPTICS cluster, expressed as an absolute number or a fraction of the number of samples (rounded to be at least 2). If None, the value of min_samples is used instead.
xi : float between 0 and 1, default=0.05
Determines the minimum steepness on the reachability plot that constitutes a cluster boundary. For example, an upwards point in the reachability plot is defined by the ratio from one point to its successor being at most 1-xi.
predecessor_correction : bool, default=True
Correct clusters based on the calculated predecessors.
Returns
labels : ndarray of shape (n_samples,)
The labels assigned to samples. Points which are not included in any cluster are labeled as -1.
clusters : ndarray of shape (n_clusters, 2)
The list of clusters in the form of [start, end] in each row, with all indices inclusive. The clusters are ordered according to (end, -start) (ascending) so that larger clusters encompassing smaller clusters come after such nested smaller clusters. Since labels does not reflect the hierarchy, usually len(clusters) > np.unique(labels).
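A hedged sketch of Xi extraction from a fitted OPTICS model; the two-blob data is illustrative, and min_samples must match the value given to OPTICS:

```python
import numpy as np
from sklearn.cluster import OPTICS, cluster_optics_xi

# Two well-separated blobs; illustrative toy data.
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(20, 2) * 0.3 + [0, 0],
               rng.randn(20, 2) * 0.3 + [6, 6]])

clust = OPTICS(min_samples=5).fit(X)
labels, clusters = cluster_optics_xi(
    reachability=clust.reachability_,
    predecessor=clust.predecessor_,
    ordering=clust.ordering_,
    min_samples=5,
)
```

Each row of clusters is a [start, end] slice into clust.ordering_, so nested clusters in the hierarchy can be inspected even though labels is flat.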
sklearn.cluster.compute_optics_graph
sklearn.cluster.compute_optics_graph(X, *, min_samples, max_eps, metric, p, metric_params, algorithm, leaf_size, n_jobs) [source]
Computes the OPTICS reachability graph. Read more in the User Guide.
Parameters
X : ndarray of shape (n_samples, n_features), or (n_samples, n_samples) if metric='precomputed'
A feature array, or array of distances between samples if metric='precomputed'.
min_samples : int > 1 or float between 0 and 1
The number of samples in a neighborhood for a point to be considered as a core point. Expressed as an absolute number or a fraction of the number of samples (rounded to be at least 2).
max_eps : float, default=np.inf
The maximum distance between two samples for one to be considered as in the neighborhood of the other. The default value of np.inf will identify clusters across all scales; reducing max_eps will result in shorter run times.
metric : str or callable, default='minkowski'
Metric to use for distance computation. Any metric from scikit-learn or scipy.spatial.distance can be used. If metric is a callable function, it is called on each pair of instances (rows) and the resulting value recorded. The callable should take two arrays as input and return one value indicating the distance between them. This works for SciPy's metrics, but is less efficient than passing the metric name as a string. If metric is 'precomputed', X is assumed to be a distance matrix and must be square. Valid values for metric are:
from scikit-learn: ['cityblock', 'cosine', 'euclidean', 'l1', 'l2', 'manhattan']
from scipy.spatial.distance: ['braycurtis', 'canberra', 'chebyshev', 'correlation', 'dice', 'hamming', 'jaccard', 'kulsinski', 'mahalanobis', 'minkowski', 'rogerstanimoto', 'russellrao', 'seuclidean', 'sokalmichener', 'sokalsneath', 'sqeuclidean', 'yule']
See the documentation for scipy.spatial.distance for details on these metrics.
p : int, default=2
Parameter for the Minkowski metric from pairwise_distances. When p = 1, this is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used.
metric_params : dict, default=None
Additional keyword arguments for the metric function.
algorithm : {'auto', 'ball_tree', 'kd_tree', 'brute'}, default='auto'
Algorithm used to compute the nearest neighbors:
'ball_tree' will use BallTree.
'kd_tree' will use KDTree.
'brute' will use a brute-force search.
'auto' will attempt to decide the most appropriate algorithm based on the values passed to the fit method (default).
Note: fitting on sparse input will override the setting of this parameter, using brute force.
leaf_size : int, default=30
Leaf size passed to BallTree or KDTree. This can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem.
n_jobs : int, default=None
The number of parallel jobs to run for neighbors search. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
Returns
ordering_ : array of shape (n_samples,)
The cluster ordered list of sample indices.
core_distances_ : array of shape (n_samples,)
Distance at which each sample becomes a core point, indexed by object order. Points which will never be core have a distance of inf. Use clust.core_distances_[clust.ordering_] to access in cluster order.
reachability_ : array of shape (n_samples,)
Reachability distances per sample, indexed by object order. Use clust.reachability_[clust.ordering_] to access in cluster order.
predecessor_ : array of shape (n_samples,)
Point that a sample was reached from, indexed by object order. Seed points have a predecessor of -1.
References
[1] Ankerst, Mihael, Markus M. Breunig, Hans-Peter Kriegel, and Jörg Sander. "OPTICS: ordering points to identify the clustering structure." ACM SIGMOD Record 28, no. 2 (1999): 49-60.
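Since every parameter here is required (no defaults), a call must spell all of them out. A minimal sketch on illustrative toy data:

```python
import numpy as np
from sklearn.cluster import compute_optics_graph

X = np.array([[1, 2], [2, 5], [3, 6],
              [8, 7], [8, 8], [7, 3]], dtype=float)

# All parameters are keyword-only with no defaults, so each one
# must be passed explicitly.
ordering, core_distances, reachability, predecessor = compute_optics_graph(
    X,
    min_samples=2,
    max_eps=np.inf,
    metric="minkowski",
    p=2,
    metric_params=None,
    algorithm="auto",
    leaf_size=30,
    n_jobs=None,
)
```

The ordering array is a permutation of the sample indices; the traversal's seed points carry inf reachability and predecessor -1.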
sklearn.cluster.dbscan
sklearn.cluster.dbscan(X, eps=0.5, *, min_samples=5, metric='minkowski', metric_params=None, algorithm='auto', leaf_size=30, p=2, sample_weight=None, n_jobs=None) [source]
Perform DBSCAN clustering from vector array or distance matrix. Read more in the User Guide.
Parameters
X : {array-like, sparse (CSR) matrix} of shape (n_samples, n_features) or (n_samples, n_samples)
A feature array, or array of distances between samples if metric='precomputed'.
eps : float, default=0.5
The maximum distance between two samples for one to be considered as in the neighborhood of the other. This is not a maximum bound on the distances of points within a cluster. This is the most important DBSCAN parameter to choose appropriately for your data set and distance function.
min_samples : int, default=5
The number of samples (or total weight) in a neighborhood for a point to be considered as a core point. This includes the point itself.
metric : str or callable, default='minkowski'
The metric to use when calculating distance between instances in a feature array. If metric is a string or callable, it must be one of the options allowed by sklearn.metrics.pairwise_distances for its metric parameter. If metric is 'precomputed', X is assumed to be a distance matrix and must be square during fit. X may be a sparse graph, in which case only "nonzero" elements may be considered neighbors.
metric_params : dict, default=None
Additional keyword arguments for the metric function. New in version 0.19.
algorithm : {'auto', 'ball_tree', 'kd_tree', 'brute'}, default='auto'
The algorithm to be used by the NearestNeighbors module to compute pointwise distances and find nearest neighbors. See NearestNeighbors module documentation for details.
leaf_size : int, default=30
Leaf size passed to BallTree or cKDTree. This can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem.
p : float, default=2
The power of the Minkowski metric to be used to calculate distance between points.
sample_weight : array-like of shape (n_samples,), default=None
Weight of each sample, such that a sample with a weight of at least min_samples is by itself a core sample; a sample with negative weight may inhibit its eps-neighbor from being core. Note that weights are absolute, and default to 1.
n_jobs : int, default=None
The number of parallel jobs to run for neighbors search. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. If precomputed distances are used, parallel execution is not available and thus n_jobs will have no effect.
Returns
core_samples : ndarray of shape (n_core_samples,)
Indices of core samples.
labels : ndarray of shape (n_samples,)
Cluster labels for each point. Noisy samples are given the label -1.
See also
DBSCAN
An estimator interface for this clustering algorithm.
OPTICS
A similar estimator interface clustering at multiple values of eps. Our implementation is optimized for memory usage.
Notes
For an example, see examples/cluster/plot_dbscan.py. This implementation bulk-computes all neighborhood queries, which increases the memory complexity to O(n.d) where d is the average number of neighbors, while original DBSCAN had memory complexity O(n). It may attract a higher memory complexity when querying these nearest neighborhoods, depending on the algorithm. One way to avoid the query complexity is to pre-compute sparse neighborhoods in chunks using NearestNeighbors.radius_neighbors_graph with mode='distance', then using metric='precomputed' here. Another way to reduce memory and computation time is to remove (near-)duplicate points and use sample_weight instead. cluster.optics provides a similar clustering with lower memory usage.
References
Ester, M., H. P. Kriegel, J. Sander, and X. Xu, "A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise". In: Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining, Portland, OR, AAAI Press, pp. 226-231. 1996.
Schubert, E., Sander, J., Ester, M., Kriegel, H. P., & Xu, X. (2017). DBSCAN revisited, revisited: why and how you should (still) use DBSCAN. ACM Transactions on Database Systems (TODS), 42(3), 19.
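A minimal sketch of the functional interface, mirroring the estimator's behavior; the data (two small groups plus a far-away outlier) is illustrative:

```python
import numpy as np
from sklearn.cluster import dbscan

# Two small groups plus one distant outlier; illustrative toy data.
X = np.array([[1, 2], [2, 2], [2, 3],
              [8, 7], [8, 8], [25, 80]], dtype=float)

# Returns the core-sample indices and per-point labels in one call;
# the outlier is labeled -1 (noise).
core_samples, labels = dbscan(X, eps=3, min_samples=2)
```

Equivalent to DBSCAN(eps=3, min_samples=2).fit(X), but the function returns the arrays directly instead of an estimator object.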
sklearn.cluster.estimate_bandwidth
sklearn.cluster.estimate_bandwidth(X, *, quantile=0.3, n_samples=None, random_state=0, n_jobs=None) [source]
Estimate the bandwidth to use with the mean-shift algorithm. Note that this function takes time at least quadratic in n_samples. For large datasets, it is wise to set the n_samples parameter to a small value.
Parameters
X : array-like of shape (n_samples, n_features)
Input points.
quantile : float, default=0.3
Should be between [0, 1]. 0.5 means that the median of all pairwise distances is used.
n_samples : int, default=None
The number of samples to use. If not given, all samples are used.
random_state : int, RandomState instance, default=0
The generator used to randomly select the samples from input points for bandwidth estimation. Use an int to make the randomness deterministic. See Glossary.
n_jobs : int, default=None
The number of parallel jobs to run for neighbors search. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
Returns
bandwidth : float
The bandwidth parameter.
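A quick sketch on illustrative toy data, using the median of pairwise distances (quantile=0.5) as described above:

```python
import numpy as np
from sklearn.cluster import estimate_bandwidth

# Two small groups of points; illustrative toy data.
X = np.array([[1, 1], [2, 1], [1, 0],
              [4, 7], [3, 5], [3, 6]], dtype=float)

# quantile=0.5 corresponds to the median of all pairwise distances.
bandwidth = estimate_bandwidth(X, quantile=0.5)
```

The returned float is typically passed straight to MeanShift(bandwidth=...) or sklearn.cluster.mean_shift.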
Examples using sklearn.cluster.estimate_bandwidth
A demo of the mean-shift clustering algorithm
Comparing different clustering algorithms on toy datasets
sklearn.cluster.kmeans_plusplus
sklearn.cluster.kmeans_plusplus(X, n_clusters, *, x_squared_norms=None, random_state=None, n_local_trials=None) [source]
Init n_clusters seeds according to k-means++. New in version 0.24.
Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The data to pick seeds from.
n_clusters : int
The number of centroids to initialize.
x_squared_norms : array-like of shape (n_samples,), default=None
Squared Euclidean norm of each data point.
random_state : int or RandomState instance, default=None
Determines random number generation for centroid initialization. Pass an int for reproducible output across multiple function calls. See Glossary.
n_local_trials : int, default=None
The number of seeding trials for each center (except the first), of which the one reducing inertia the most is greedily chosen. Set to None to make the number of trials depend logarithmically on the number of seeds (2+log(k)).
Returns
centers : ndarray of shape (n_clusters, n_features)
The initial centers for k-means.
indices : ndarray of shape (n_clusters,)
The index location of the chosen centers in the data array X. For a given index and center, X[index] = center.
Notes
Selects initial cluster centers for k-means clustering in a smart way to speed up convergence. See: Arthur, D. and Vassilvitskii, S. "k-means++: the advantages of careful seeding". ACM-SIAM symposium on Discrete algorithms. 2007.
Examples
>>> from sklearn.cluster import kmeans_plusplus
>>> import numpy as np
>>> X = np.array([[1, 2], [1, 4], [1, 0],
... [10, 2], [10, 4], [10, 0]])
>>> centers, indices = kmeans_plusplus(X, n_clusters=2, random_state=0)
>>> centers
array([[10, 4],
[ 1, 0]])
>>> indices
array([4, 2])
Examples using sklearn.cluster.kmeans_plusplus
An example of K-Means++ initialization
sklearn.cluster.k_means
sklearn.cluster.k_means(X, n_clusters, *, sample_weight=None, init='k-means++', precompute_distances='deprecated', n_init=10, max_iter=300, verbose=False, tol=0.0001, random_state=None, copy_x=True, n_jobs='deprecated', algorithm='auto', return_n_iter=False) [source]
K-means clustering algorithm. Read more in the User Guide.
Parameters
X : {array-like, sparse matrix} of shape (n_samples, n_features)
The observations to cluster. It must be noted that the data will be converted to C ordering, which will cause a memory copy if the given data is not C-contiguous.
n_clusters : int
The number of clusters to form as well as the number of centroids to generate.
sample_weight : array-like of shape (n_samples,), default=None
The weights for each observation in X. If None, all observations are assigned equal weight.
init : {'k-means++', 'random'}, callable or array-like of shape (n_clusters, n_features), default='k-means++'
Method for initialization:
'k-means++' : selects initial cluster centers for k-means clustering in a smart way to speed up convergence. See section Notes in k_init for more details.
'random' : choose n_clusters observations (rows) at random from data for the initial centroids.
If an array is passed, it should be of shape (n_clusters, n_features) and gives the initial centers.
If a callable is passed, it should take arguments X, n_clusters and a random state and return an initialization.
precompute_distances : {'auto', True, False}
Precompute distances (faster but takes more memory).
'auto' : do not precompute distances if n_samples * n_clusters > 12 million. This corresponds to about 100 MB overhead per job using double precision.
True : always precompute distances.
False : never precompute distances.
Deprecated since version 0.23: 'precompute_distances' was deprecated in version 0.23 and will be removed in 1.0 (renaming of 0.25). It has no effect.
n_init : int, default=10
Number of times the k-means algorithm will be run with different centroid seeds. The final result will be the best output of n_init consecutive runs in terms of inertia.
max_iter : int, default=300
Maximum number of iterations of the k-means algorithm to run.
verbose : bool, default=False
Verbosity mode.
tol : float, default=1e-4
Relative tolerance with regards to the Frobenius norm of the difference in the cluster centers of two consecutive iterations to declare convergence.
random_state : int, RandomState instance or None, default=None
Determines random number generation for centroid initialization. Use an int to make the randomness deterministic. See Glossary.
copy_x : bool, default=True
When pre-computing distances it is more numerically accurate to center the data first. If copy_x is True (default), then the original data is not modified. If False, the original data is modified, and put back before the function returns, but small numerical differences may be introduced by subtracting and then adding the data mean. Note that if the original data is not C-contiguous, a copy will be made even if copy_x is False. If the original data is sparse, but not in CSR format, a copy will be made even if copy_x is False.
n_jobs : int, default=None
The number of OpenMP threads to use for the computation. Parallelism is sample-wise on the main cython loop which assigns each sample to its closest center. None or -1 means using all processors. Deprecated since version 0.23: n_jobs was deprecated in version 0.23 and will be removed in 1.0 (renaming of 0.25).
algorithm : {'auto', 'full', 'elkan'}, default='auto'
K-means algorithm to use. The classical EM-style algorithm is 'full'. The 'elkan' variation is more efficient on data with well-defined clusters, by using the triangle inequality. However it's more memory intensive due to the allocation of an extra array of shape (n_samples, n_clusters). For now 'auto' (kept for backward compatibility) chooses 'elkan' but it might change in the future for a better heuristic.
return_n_iter : bool, default=False
Whether or not to return the number of iterations.
Returns
centroid : ndarray of shape (n_clusters, n_features)
Centroids found at the last iteration of k-means.
label : ndarray of shape (n_samples,)
label[i] is the code or index of the centroid the i'th observation is closest to.
inertia : float
The final value of the inertia criterion (sum of squared distances to the closest centroid for all observations in the training set).
best_n_iter : int
Number of iterations corresponding to the best results. Returned only if return_n_iter is set to True.
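A minimal sketch of the functional interface on illustrative toy data with two obvious clusters; with the default return_n_iter=False the call returns a 3-tuple:

```python
import numpy as np
from sklearn.cluster import k_means

# Two clearly separated columns of points; illustrative toy data.
X = np.array([[1, 2], [1, 4], [1, 0],
              [10, 2], [10, 4], [10, 0]], dtype=float)

# Returns (centroid, label, inertia); pass return_n_iter=True to
# additionally get the iteration count of the best run.
centroids, labels, inertia = k_means(X, n_clusters=2, random_state=0)
```

Here each cluster's centroid is its column mean, so the final inertia is the within-column sum of squared distances.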
sklearn.cluster.mean_shift
sklearn.cluster.mean_shift(X, *, bandwidth=None, seeds=None, bin_seeding=False, min_bin_freq=1, cluster_all=True, max_iter=300, n_jobs=None) [source]
Perform mean shift clustering of data using a flat kernel. Read more in the User Guide.
Parameters
X : array-like of shape (n_samples, n_features)
Input data.
bandwidth : float, default=None
Kernel bandwidth. If bandwidth is not given, it is determined using a heuristic based on the median of all pairwise distances. This will take quadratic time in the number of samples. The sklearn.cluster.estimate_bandwidth function can be used to do this more efficiently.
seeds : array-like of shape (n_seeds, n_features) or None
Points used as initial kernel locations. If None and bin_seeding=False, each data point is used as a seed. If None and bin_seeding=True, see bin_seeding.
bin_seeding : bool, default=False
If true, initial kernel locations are not locations of all points, but rather the location of the discretized version of points, where points are binned onto a grid whose coarseness corresponds to the bandwidth. Setting this option to True will speed up the algorithm because fewer seeds will be initialized. Ignored if the seeds argument is not None.
min_bin_freq : int, default=1
To speed up the algorithm, accept only those bins with at least min_bin_freq points as seeds.
cluster_all : bool, default=True
If true, then all points are clustered, even those orphans that are not within any kernel. Orphans are assigned to the nearest kernel. If false, then orphans are given cluster label -1.
max_iter : int, default=300
Maximum number of iterations, per seed point, before the clustering operation terminates (for that seed point) if it has not converged yet.
n_jobs : int, default=None
The number of jobs to use for the computation. This works by computing each of the n_init runs in parallel. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. New in version 0.17: Parallel Execution using n_jobs.
Returns
cluster_centers : ndarray of shape (n_clusters, n_features)
Coordinates of cluster centers.
labels : ndarray of shape (n_samples,)
Cluster labels for each point.
Notes
For an example, see examples/cluster/plot_mean_shift.py.
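A minimal sketch with an explicit bandwidth (skipping the quadratic-time heuristic); the two-group toy data is illustrative:

```python
import numpy as np
from sklearn.cluster import mean_shift

# Two small groups of points; illustrative toy data.
X = np.array([[1, 1], [2, 1], [1, 0],
              [4, 7], [3, 5], [3, 6]], dtype=float)

# Passing bandwidth directly avoids the quadratic-time estimation
# heuristic mentioned above.
cluster_centers, labels = mean_shift(X, bandwidth=2)
```

For larger data, estimate_bandwidth with a small n_samples can supply the bandwidth argument instead of a hand-picked value.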
sklearn.cluster.spectral_clustering
sklearn.cluster.spectral_clustering(affinity, *, n_clusters=8, n_components=None, eigen_solver=None, random_state=None, n_init=10, eigen_tol=0.0, assign_labels='kmeans', verbose=False) [source]
Apply clustering to a projection of the normalized Laplacian. In practice Spectral Clustering is very useful when the structure of the individual clusters is highly non-convex, or more generally when a measure of the center and spread of the cluster is not a suitable description of the complete cluster, for instance when clusters are nested circles on the 2D plane. If affinity is the adjacency matrix of a graph, this method can be used to find normalized graph cuts. Read more in the User Guide.
Parameters
affinity : {array-like, sparse matrix} of shape (n_samples, n_samples)
The affinity matrix describing the relationship of the samples to embed. Must be symmetric. Possible examples:
adjacency matrix of a graph,
heat kernel of the pairwise distance matrix of the samples,
symmetric k-nearest neighbours connectivity matrix of the samples.
n_clusters : int, default=8
Number of clusters to extract.
n_components : int, default=n_clusters
Number of eigenvectors to use for the spectral embedding.
eigen_solver : {None, 'arpack', 'lobpcg', or 'amg'}
The eigenvalue decomposition strategy to use. AMG requires pyamg to be installed. It can be faster on very large, sparse problems, but may also lead to instabilities. If None, then 'arpack' is used.
random_state : int, RandomState instance, default=None
A pseudo random number generator used for the initialization of the lobpcg eigenvectors decomposition when eigen_solver == 'amg' and by the K-Means initialization. Use an int to make the randomness deterministic. See Glossary.
n_init : int, default=10
Number of times the k-means algorithm will be run with different centroid seeds. The final result will be the best output of n_init consecutive runs in terms of inertia.
eigen_tol : float, default=0.0
Stopping criterion for eigendecomposition of the Laplacian matrix when using the arpack eigen_solver.
assign_labels : {'kmeans', 'discretize'}, default='kmeans'
The strategy to use to assign labels in the embedding space. There are two ways to assign labels after the Laplacian embedding. k-means can be applied and is a popular choice. But it can also be sensitive to initialization. Discretization is another approach which is less sensitive to random initialization. See the 'Multiclass spectral clustering' paper referenced below for more details on the discretization approach.
verbose : bool, default=False
Verbosity mode. New in version 0.24.
Returns
labels : array of integers, shape: n_samples
The labels of the clusters.
Notes
The graph should contain only one connected component; otherwise the results make little sense. This algorithm solves the normalized cut for k=2: it is a normalized spectral clustering.
References
Normalized cuts and image segmentation, 2000. Jianbo Shi, Jitendra Malik. http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.160.2324
A Tutorial on Spectral Clustering, 2007 Ulrike von Luxburg http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.165.9323
Multiclass spectral clustering, 2003 Stella X. Yu, Jianbo Shi https://www1.icsi.berkeley.edu/~stellayu/publication/doc/2003kwayICCV.pdf
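A hedged sketch of the functional interface: since the function takes an affinity matrix rather than raw features, one common choice is the RBF (heat) kernel of the data, as listed among the possible affinities above. The two-blob data is illustrative.

```python
import numpy as np
from sklearn.cluster import spectral_clustering
from sklearn.metrics.pairwise import rbf_kernel

# Two well-separated blobs; the RBF kernel of the data serves as a
# symmetric affinity matrix.
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(10, 2) * 0.2 + [0, 0],
               rng.randn(10, 2) * 0.2 + [5, 5]])
affinity = rbf_kernel(X)

labels = spectral_clustering(affinity, n_clusters=2, random_state=0)
```

With the blobs this far apart, the affinity matrix is nearly block-diagonal and the two blocks map onto the two labels.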
Examples using sklearn.cluster.spectral_clustering
Segmenting the picture of greek coins in regions
Spectral clustering for image segmentation
sklearn.cluster.ward_tree
sklearn.cluster.ward_tree(X, *, connectivity=None, n_clusters=None, return_distance=False) [source]
Ward clustering based on a feature matrix. Recursively merges the pair of clusters that minimally increases within-cluster variance. The inertia matrix uses a Heapq-based representation. This is the structured version, that takes into account some topological structure between samples. Read more in the User Guide.
Parameters
X : array-like of shape (n_samples, n_features)
Feature matrix representing n_samples samples to be clustered.
connectivity : sparse matrix, default=None
Connectivity matrix. Defines for each sample the neighboring samples following a given structure of the data. The matrix is assumed to be symmetric and only the upper triangular half is used. Default is None, i.e., the Ward algorithm is unstructured.
n_clusters : int, default=None
Stop early the construction of the tree at n_clusters. This is useful to decrease computation time if the number of clusters is not small compared to the number of samples. In this case, the complete tree is not computed, thus the 'children' output is of limited use, and the 'parents' output should rather be used. This option is valid only when specifying a connectivity matrix.
return_distance : bool, default=False
If True, return the distance between the clusters.
Returns
children : ndarray of shape (n_nodes-1, 2)
The children of each non-leaf node. Values less than n_samples correspond to leaves of the tree which are the original samples. A node i greater than or equal to n_samples is a non-leaf node and has children children_[i - n_samples]. Alternatively at the i-th iteration, children[i][0] and children[i][1] are merged to form node n_samples + i.
n_connected_components : int
The number of connected components in the graph.
n_leaves : int
The number of leaves in the tree.
parents : ndarray of shape (n_nodes,) or None
The parent of each node. Only returned when a connectivity matrix is specified, elsewhere 'None' is returned.
distances : ndarray of shape (n_nodes-1,)
Only returned if return_distance is set to True (for compatibility). The distances between the centers of the nodes. distances[i] corresponds to a weighted euclidean distance between the nodes children[i, 0] and children[i, 1]. If the nodes refer to leaves of the tree, then distances[i] is their unweighted euclidean distance. Distances are updated in the following way (from scipy.cluster.hierarchy.linkage):
The new entry \(d(u,v)\) is computed as follows,
\[d(u,v) = \sqrt{\frac{|v|+|s|}{T}d(v,s)^2 + \frac{|v|+|t|}{T}d(v,t)^2 - \frac{|v|}{T}d(s,t)^2}\]
where \(u\) is the newly joined cluster consisting of clusters \(s\) and \(t\), \(v\) is an unused cluster in the forest, \(T=|v|+|s|+|t|\), and \(|*|\) is the cardinality of its argument. This is also known as the incremental algorithm.
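A minimal sketch of the unstructured case (no connectivity matrix), where the returns are (children, n_connected_components, n_leaves, parents) with parents being None; the toy data is illustrative:

```python
import numpy as np
from sklearn.cluster import ward_tree

# Six points forming two vertical strips; illustrative toy data.
X = np.array([[1, 2], [1, 4], [1, 0],
              [4, 2], [4, 4], [4, 0]], dtype=float)

# Unstructured Ward: no connectivity matrix is given, so the whole
# merge tree is built and parents comes back as None.
children, n_connected_components, n_leaves, parents = ward_tree(X)
```

With n_samples leaves, the merge tree has n_samples - 1 internal nodes, so children has shape (n_samples - 1, 2).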
sklearn.compose.make_column_selector
sklearn.compose.make_column_selector(pattern=None, *, dtype_include=None, dtype_exclude=None) [source]
Create a callable to select columns to be used with ColumnTransformer. make_column_selector can select columns based on datatype or on the column names with a regex. When using multiple selection criteria, all criteria must match for a column to be selected.
Parameters
pattern : str, default=None
Name of columns containing this regex pattern will be included. If None, columns will not be selected based on a pattern.
dtype_include : column dtype or list of column dtypes, default=None
A selection of dtypes to include. For more details, see pandas.DataFrame.select_dtypes.
dtype_exclude : column dtype or list of column dtypes, default=None
A selection of dtypes to exclude. For more details, see pandas.DataFrame.select_dtypes.
Returns
selector : callable
Callable for column selection to be used by a ColumnTransformer.
See also
ColumnTransformer
Class that allows combining the outputs of multiple transformer objects used on column subsets of the data into a single feature space. Examples >>> from sklearn.preprocessing import StandardScaler, OneHotEncoder
>>> from sklearn.compose import make_column_transformer
>>> from sklearn.compose import make_column_selector
>>> import pandas as pd
>>> import numpy as np
>>> X = pd.DataFrame({'city': ['London', 'London', 'Paris', 'Sallisaw'],
... 'rating': [5, 3, 4, 5]})
>>> ct = make_column_transformer(
... (StandardScaler(),
... make_column_selector(dtype_include=np.number)), # rating
... (OneHotEncoder(),
... make_column_selector(dtype_include=object))) # city
>>> ct.fit_transform(X)
array([[ 0.90453403, 1. , 0. , 0. ],
[-1.50755672, 1. , 0. , 0. ],
[-0.30151134, 0. , 1. , 0. ],
[ 0.90453403, 0. , 0. , 1. ]])
Examples using sklearn.compose.make_column_selector
Categorical Feature Support in Gradient Boosting
Column Transformer with Mixed Types
sklearn.compose.make_column_transformer
sklearn.compose.make_column_transformer(*transformers, remainder='drop', sparse_threshold=0.3, n_jobs=None, verbose=False) [source]
Construct a ColumnTransformer from the given transformers. This is a shorthand for the ColumnTransformer constructor; it does not require, and does not permit, naming the transformers. Instead, they will be given names automatically based on their types. It also does not allow weighting with transformer_weights. Read more in the User Guide.
Parameters
*transformers : tuples
Tuples of the form (transformer, columns) specifying the transformer objects to be applied to subsets of the data.
transformer : {'drop', 'passthrough'} or estimator
Estimator must support fit and transform. Special-cased strings 'drop' and 'passthrough' are accepted as well, to indicate to drop the columns or to pass them through untransformed, respectively.
columns : str, array-like of str, int, array-like of int, slice, array-like of bool or callable
Indexes the data on its second axis. Integers are interpreted as positional columns, while strings can reference DataFrame columns by name. A scalar string or int should be used where the transformer expects X to be a 1d array-like (vector), otherwise a 2d array will be passed to the transformer. A callable is passed the input data X and can return any of the above. To select multiple columns by name or dtype, you can use make_column_selector.
remainder : {'drop', 'passthrough'} or estimator, default='drop'
By default, only the specified columns in transformers are transformed and combined in the output, and the non-specified columns are dropped (default of 'drop'). By specifying remainder='passthrough', all remaining columns that were not specified in transformers will be automatically passed through. This subset of columns is concatenated with the output of the transformers. By setting remainder to be an estimator, the remaining non-specified columns will use the remainder estimator. The estimator must support fit and transform.
sparse_threshold : float, default=0.3
If the transformed output consists of a mix of sparse and dense data, it will be stacked as a sparse matrix if the density is lower than this value. Use sparse_threshold=0 to always return dense. When the transformed output consists of all sparse or all dense data, the stacked result will be sparse or dense, respectively, and this keyword will be ignored.
n_jobs : int, default=None
Number of jobs to run in parallel. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
verbose : bool, default=False
If True, the time elapsed while fitting each transformer will be printed as it is completed.
Returns
ct : ColumnTransformer
See also
ColumnTransformer
Class that allows combining the outputs of multiple transformer objects used on column subsets of the data into a single feature space. Examples >>> from sklearn.preprocessing import StandardScaler, OneHotEncoder
>>> from sklearn.compose import make_column_transformer
>>> make_column_transformer(
... (StandardScaler(), ['numerical_column']),
... (OneHotEncoder(), ['categorical_column']))
ColumnTransformer(transformers=[('standardscaler', StandardScaler(...),
['numerical_column']),
('onehotencoder', OneHotEncoder(...),
['categorical_column'])])
Examples using sklearn.compose.make_column_transformer
Release Highlights for scikit-learn 0.23
Categorical Feature Support in Gradient Boosting
Combine predictors using stacking
Common pitfalls in interpretation of coefficients of linear models | sklearn.modules.generated.sklearn.compose.make_column_transformer |
sklearn.config_context
sklearn.config_context(**new_config) [source]
Context manager for global scikit-learn configuration. Parameters
assume_finitebool, default=False
If True, validation for finiteness will be skipped, saving time, but leading to potential crashes. If False, validation for finiteness will be performed, avoiding error. Global default: False.
working_memoryint, default=1024
If set, scikit-learn will attempt to limit the size of temporary arrays to this number of MiB (per job when parallelised), often saving both computation time and memory on expensive operations that can be performed in chunks. Global default: 1024.
print_changed_onlybool, default=True
If True, only the parameters that were set to non-default values will be printed when printing an estimator. For example, print(SVC()) will print just ‘SVC()’ when True, but ‘SVC(C=1.0, cache_size=200, …)’ with all the unchanged parameters when False. Default is True. Changed in version 0.23: Default changed from False to True.
display{‘text’, ‘diagram’}, default=’text’
If ‘diagram’, estimators will be displayed as a diagram in a Jupyter lab or notebook context. If ‘text’, estimators will be displayed as text. Default is ‘text’. New in version 0.23. See also
set_config
Set global scikit-learn configuration.
get_config
Retrieve current values of the global configuration. Notes All settings, not just those presently modified, will be returned to their previous values when the context manager is exited. This is not thread-safe. Examples >>> import sklearn
>>> from sklearn.utils.validation import assert_all_finite
>>> with sklearn.config_context(assume_finite=True):
... assert_all_finite([float('nan')])
>>> with sklearn.config_context(assume_finite=True):
... with sklearn.config_context(assume_finite=False):
... assert_all_finite([float('nan')])
Traceback (most recent call last):
...
ValueError: Input contains NaN, ... | sklearn.modules.generated.sklearn.config_context |
sklearn.covariance.empirical_covariance
sklearn.covariance.empirical_covariance(X, *, assume_centered=False) [source]
Computes the maximum likelihood covariance estimator. Parameters
Xndarray of shape (n_samples, n_features)
Data from which to compute the covariance estimate
assume_centeredbool, default=False
If True, data will not be centered before computation. Useful when working with data whose mean is almost, but not exactly zero. If False, data will be centered before computation. Returns
covariancendarray of shape (n_features, n_features)
Empirical covariance (Maximum Likelihood Estimator). Examples >>> from sklearn.covariance import empirical_covariance
>>> X = [[1,1,1],[1,1,1],[1,1,1],
... [0,0,0],[0,0,0],[0,0,0]]
>>> empirical_covariance(X)
array([[0.25, 0.25, 0.25],
[0.25, 0.25, 0.25],
[0.25, 0.25, 0.25]])
Examples using sklearn.covariance.empirical_covariance
Shrinkage covariance estimation: LedoitWolf vs OAS and max-likelihood | sklearn.modules.generated.sklearn.covariance.empirical_covariance |
sklearn.covariance.graphical_lasso
sklearn.covariance.graphical_lasso(emp_cov, alpha, *, cov_init=None, mode='cd', tol=0.0001, enet_tol=0.0001, max_iter=100, verbose=False, return_costs=False, eps=2.220446049250313e-16, return_n_iter=False) [source]
l1-penalized covariance estimator. Read more in the User Guide. Changed in version v0.20: graph_lasso has been renamed to graphical_lasso. Parameters
emp_covndarray of shape (n_features, n_features)
Empirical covariance from which to compute the covariance estimate.
alphafloat
The regularization parameter: the higher alpha, the more regularization, the sparser the inverse covariance. Range is (0, inf].
cov_initarray of shape (n_features, n_features), default=None
The initial guess for the covariance. If None, then the empirical covariance is used.
mode{‘cd’, ‘lars’}, default=’cd’
The Lasso solver to use: coordinate descent or LARS. Use LARS for very sparse underlying graphs, where p > n. Elsewhere prefer cd which is more numerically stable.
tolfloat, default=1e-4
The tolerance to declare convergence: if the dual gap goes below this value, iterations are stopped. Range is (0, inf].
enet_tolfloat, default=1e-4
The tolerance for the elastic net solver used to calculate the descent direction. This parameter controls the accuracy of the search direction for a given column update, not of the overall parameter estimate. Only used for mode=’cd’. Range is (0, inf].
max_iterint, default=100
The maximum number of iterations.
verbosebool, default=False
If verbose is True, the objective function and dual gap are printed at each iteration.
return_costsbool, default=False
If return_costs is True, the objective function and dual gap at each iteration are returned.
epsfloat, default=eps
The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Default is np.finfo(np.float64).eps.
return_n_iterbool, default=False
Whether or not to return the number of iterations. Returns
covariancendarray of shape (n_features, n_features)
The estimated covariance matrix.
precisionndarray of shape (n_features, n_features)
The estimated (sparse) precision matrix.
costslist of (objective, dual_gap) pairs
The list of values of the objective function and the dual gap at each iteration. Returned only if return_costs is True.
n_iterint
Number of iterations. Returned only if return_n_iter is set to True. See also
GraphicalLasso, GraphicalLassoCV
Notes The algorithm employed to solve this problem is the GLasso algorithm, from the Friedman 2008 Biostatistics paper. It is the same algorithm as in the R glasso package. One possible difference with the glasso R package is that the diagonal coefficients are not penalized. | sklearn.modules.generated.sklearn.covariance.graphical_lasso |
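As a minimal sketch of how the function is called (the toy data and alpha value below are illustrative, not from the source), graphical_lasso operates on an empirical covariance matrix rather than on raw data:

```python
import numpy as np
from sklearn.covariance import empirical_covariance, graphical_lasso

# Toy data: 60 samples, 4 features, with some induced correlation
rng = np.random.RandomState(0)
X = rng.randn(60, 4)
X[:, 1] += 0.6 * X[:, 0]

# graphical_lasso takes the empirical covariance, not X itself
emp_cov = empirical_covariance(X)

# Returns the regularized covariance and the (sparse) precision matrix
covariance, precision = graphical_lasso(emp_cov, alpha=0.05)
```

Larger alpha values drive more off-diagonal entries of the precision matrix toward zero, yielding a sparser conditional-independence graph.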
sklearn.covariance.ledoit_wolf
sklearn.covariance.ledoit_wolf(X, *, assume_centered=False, block_size=1000) [source]
Estimates the shrunk Ledoit-Wolf covariance matrix. Read more in the User Guide. Parameters
Xarray-like of shape (n_samples, n_features)
Data from which to compute the covariance estimate
assume_centeredbool, default=False
If True, data will not be centered before computation. Useful when working with data whose mean is almost, but not exactly, zero. If False, data will be centered before computation.
block_sizeint, default=1000
Size of blocks into which the covariance matrix will be split. This is purely a memory optimization and does not affect results. Returns
shrunk_covndarray of shape (n_features, n_features)
Shrunk covariance.
shrinkagefloat
Coefficient in the convex combination used for the computation of the shrunk estimate. Notes The regularized (shrunk) covariance is: (1 - shrinkage) * cov + shrinkage * mu * np.identity(n_features) where mu = trace(cov) / n_features
Examples using sklearn.covariance.ledoit_wolf
Sparse inverse covariance estimation | sklearn.modules.generated.sklearn.covariance.ledoit_wolf |
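A minimal sketch of the call (toy data below is illustrative): unlike graphical_lasso, ledoit_wolf takes the raw data matrix and returns both the shrunk covariance and the automatically chosen shrinkage coefficient:

```python
import numpy as np
from sklearn.covariance import ledoit_wolf

# Toy data: 50 samples, 3 features (illustrative values)
rng = np.random.RandomState(0)
X = rng.randn(50, 3)

# shrinkage is selected by the Ledoit-Wolf formula, not by the user
shrunk_cov, shrinkage = ledoit_wolf(X)
```

The returned shrinkage lies in [0, 1] and plugs into the convex combination given in the Notes above.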
sklearn.covariance.oas
sklearn.covariance.oas(X, *, assume_centered=False) [source]
Estimate covariance with the Oracle Approximating Shrinkage algorithm. Parameters
Xarray-like of shape (n_samples, n_features)
Data from which to compute the covariance estimate.
assume_centeredbool, default=False
If True, data will not be centered before computation. Useful when working with data whose mean is almost, but not exactly, zero. If False, data will be centered before computation. Returns
shrunk_covarray-like of shape (n_features, n_features)
Shrunk covariance.
shrinkagefloat
Coefficient in the convex combination used for the computation of the shrunk estimate. Notes The regularised (shrunk) covariance is: (1 - shrinkage) * cov + shrinkage * mu * np.identity(n_features) where mu = trace(cov) / n_features The formula we used to implement the OAS is slightly modified compared to the one given in the article. See OAS for more details. | sklearn.modules.generated.oas-function |
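The call mirrors ledoit_wolf: pass the raw data and receive the shrunk covariance together with the OAS-selected shrinkage coefficient. A minimal sketch (data values are illustrative):

```python
import numpy as np
from sklearn.covariance import oas

# Toy data: 40 samples, 5 features (illustrative values)
rng = np.random.RandomState(42)
X = rng.randn(40, 5)

# The OAS formula picks the shrinkage coefficient automatically
shrunk_cov, shrinkage = oas(X)
```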
sklearn.covariance.shrunk_covariance
sklearn.covariance.shrunk_covariance(emp_cov, shrinkage=0.1) [source]
Calculates a covariance matrix shrunk on the diagonal. Read more in the User Guide. Parameters
emp_covarray-like of shape (n_features, n_features)
Covariance matrix to be shrunk
shrinkagefloat, default=0.1
Coefficient in the convex combination used for the computation of the shrunk estimate. Range is [0, 1]. Returns
shrunk_covndarray of shape (n_features, n_features)
Shrunk covariance. Notes The regularized (shrunk) covariance is given by: (1 - shrinkage) * cov + shrinkage * mu * np.identity(n_features) where mu = trace(cov) / n_features | sklearn.modules.generated.sklearn.covariance.shrunk_covariance |
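The formula in the Notes can be checked directly. A minimal sketch (the toy data is illustrative): shrink an empirical covariance by a fixed coefficient and compare against the convex-combination formula:

```python
import numpy as np
from sklearn.covariance import empirical_covariance, shrunk_covariance

# Toy data: 4 samples, 2 features (illustrative values)
X = np.array([[1.0, 2.0], [3.0, 1.0], [2.0, 3.0], [0.0, 0.0]])
emp_cov = empirical_covariance(X)

shrunk = shrunk_covariance(emp_cov, shrinkage=0.1)

# Reproduce the Notes formula: (1 - shrinkage) * cov + shrinkage * mu * I
mu = np.trace(emp_cov) / emp_cov.shape[0]
expected = (1 - 0.1) * emp_cov + 0.1 * mu * np.eye(2)
```

Here shrinkage=0.1 pulls the covariance 10% of the way toward the scaled identity mu * I, which regularizes the eigenvalue spectrum without changing the total variance (the trace).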
sklearn.datasets.clear_data_home
sklearn.datasets.clear_data_home(data_home=None) [source]
Delete all the content of the data home cache. Parameters
data_homestr, default=None
The path to scikit-learn data directory. If None, the default path is ~/scikit_learn_data.
sklearn.datasets.dump_svmlight_file
sklearn.datasets.dump_svmlight_file(X, y, f, *, zero_based=True, comment=None, query_id=None, multilabel=False) [source]
Dump the dataset in svmlight / libsvm file format. This format is a text-based format, with one sample per line. It does not store zero-valued features and hence is suitable for sparse datasets. The first element of each line can be used to store a target variable to predict. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features.
y{array-like, sparse matrix}, shape = [n_samples (, n_labels)]
Target values. Class labels must be an integer or float, or array-like objects of integer or float for multilabel classifications.
fstring or file-like in binary mode
If string, specifies the path that will contain the data. If file-like, data will be written to f. f should be opened in binary mode.
zero_basedboolean, default=True
Whether column indices should be written zero-based (True) or one-based (False).
commentstring, default=None
Comment to insert at the top of the file. This should be either a Unicode string, which will be encoded as UTF-8, or an ASCII byte string. If a comment is given, then it will be preceded by one that identifies the file as having been dumped by scikit-learn. Note that not all tools grok comments in SVMlight files.
query_idarray-like of shape (n_samples,), default=None
Array containing pairwise preference constraints (qid in svmlight format).
multilabelboolean, default=False
Samples may have several labels each (see https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multilabel.html) New in version 0.17: parameter multilabel to support multilabel datasets.
Examples using sklearn.datasets.dump_svmlight_file
Libsvm GUI | sklearn.modules.generated.sklearn.datasets.dump_svmlight_file |
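A minimal round-trip sketch (the toy matrix and the use of BytesIO are illustrative): since f accepts a file-like object opened in binary mode, the dump can be verified in memory by reading it back with load_svmlight_file:

```python
from io import BytesIO

import numpy as np
from sklearn.datasets import dump_svmlight_file, load_svmlight_file

# Toy data with explicit zeros, which the svmlight format omits
X = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 0.0]])
y = np.array([0, 1])

buf = BytesIO()
dump_svmlight_file(X, y, buf, zero_based=True)

# Round-trip: read the dumped bytes back into a sparse matrix
buf.seek(0)
X_loaded, y_loaded = load_svmlight_file(buf, n_features=3, zero_based=True)
```

n_features must be passed on reload here because the zero-valued trailing column of the second row is not stored in the file.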
sklearn.datasets.fetch_20newsgroups
sklearn.datasets.fetch_20newsgroups(*, data_home=None, subset='train', categories=None, shuffle=True, random_state=42, remove=(), download_if_missing=True, return_X_y=False) [source]
Load the filenames and data from the 20 newsgroups dataset (classification). Download it if necessary.
Classes 20
Samples total 18846
Dimensionality 1
Features text Read more in the User Guide. Parameters
data_homestr, default=None
Specify a download and cache folder for the datasets. If None, all scikit-learn data is stored in ‘~/scikit_learn_data’ subfolders.
subset{‘train’, ‘test’, ‘all’}, default=’train’
Select the dataset to load: ‘train’ for the training set, ‘test’ for the test set, ‘all’ for both, with shuffled ordering.
categoriesarray-like, dtype=str or unicode, default=None
If None (default), load all the categories. If not None, list of category names to load (other categories ignored).
shufflebool, default=True
Whether or not to shuffle the data: might be important for models that make the assumption that the samples are independent and identically distributed (i.i.d.), such as stochastic gradient descent.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset shuffling. Pass an int for reproducible output across multiple function calls. See Glossary.
removetuple, default=()
May contain any subset of (‘headers’, ‘footers’, ‘quotes’). Each of these are kinds of text that will be detected and removed from the newsgroup posts, preventing classifiers from overfitting on metadata. ‘headers’ removes newsgroup headers, ‘footers’ removes blocks at the ends of posts that look like signatures, and ‘quotes’ removes lines that appear to be quoting another post. ‘headers’ follows an exact standard; the other filters are not always correct.
download_if_missingbool, default=True
If False, raise an IOError if the data is not locally available instead of trying to download the data from the source site.
return_X_ybool, default=False
If True, returns (data.data, data.target) instead of a Bunch object. New in version 0.22. Returns
bunchBunch
Dictionary-like object, with the following attributes.
datalist of shape (n_samples,)
The data list to learn. target: ndarray of shape (n_samples,)
The target labels. filenames: list of shape (n_samples,)
The path to the location of the data. DESCR: str
The full description of the dataset. target_names: list of shape (n_classes,)
The names of target classes.
(data, target)tuple if return_X_y=True
New in version 0.22.
Examples using sklearn.datasets.fetch_20newsgroups
Biclustering documents with the Spectral Co-clustering algorithm
Topic extraction with Non-negative Matrix Factorization and Latent Dirichlet Allocation
Sample pipeline for text feature extraction and evaluation
Column Transformer with Heterogeneous Data Sources
Semi-supervised Classification on a Text Dataset
FeatureHasher and DictVectorizer Comparison
Clustering text documents using k-means
Classification of text documents using sparse features | sklearn.modules.generated.sklearn.datasets.fetch_20newsgroups |
sklearn.datasets.fetch_20newsgroups_vectorized
sklearn.datasets.fetch_20newsgroups_vectorized(*, subset='train', remove=(), data_home=None, download_if_missing=True, return_X_y=False, normalize=True, as_frame=False) [source]
Load and vectorize the 20 newsgroups dataset (classification). Download it if necessary. This is a convenience function; the transformation is done using the default settings for CountVectorizer. For more advanced usage (stopword filtering, n-gram extraction, etc.), combine fetch_20newsgroups with a custom CountVectorizer, HashingVectorizer, TfidfTransformer or TfidfVectorizer. The resulting counts are normalized using sklearn.preprocessing.normalize unless normalize is set to False.
Classes 20
Samples total 18846
Dimensionality 130107
Features real Read more in the User Guide. Parameters
subset{‘train’, ‘test’, ‘all’}, default=’train’
Select the dataset to load: ‘train’ for the training set, ‘test’ for the test set, ‘all’ for both, with shuffled ordering.
removetuple, default=()
May contain any subset of (‘headers’, ‘footers’, ‘quotes’). Each of these are kinds of text that will be detected and removed from the newsgroup posts, preventing classifiers from overfitting on metadata. ‘headers’ removes newsgroup headers, ‘footers’ removes blocks at the ends of posts that look like signatures, and ‘quotes’ removes lines that appear to be quoting another post.
data_homestr, default=None
Specify a download and cache folder for the datasets. If None, all scikit-learn data is stored in ‘~/scikit_learn_data’ subfolders.
download_if_missingbool, default=True
If False, raise an IOError if the data is not locally available instead of trying to download the data from the source site.
return_X_ybool, default=False
If True, returns (data.data, data.target) instead of a Bunch object. New in version 0.20.
normalizebool, default=True
If True, normalizes each document’s feature vector to unit norm using sklearn.preprocessing.normalize. New in version 0.22.
as_framebool, default=False
If True, the data is a pandas DataFrame including columns with appropriate dtypes (numeric, string, or categorical). The target is a pandas DataFrame or Series depending on the number of target_columns. New in version 0.24. Returns
bunchBunch
Dictionary-like object, with the following attributes. data: {sparse matrix, dataframe} of shape (n_samples, n_features)
The input data matrix. If as_frame is True, data is a pandas DataFrame with sparse columns. target: {ndarray, series} of shape (n_samples,)
The target labels. If as_frame is True, target is a pandas Series. target_names: list of shape (n_classes,)
The names of target classes. DESCR: str
The full description of the dataset. frame: dataframe of shape (n_samples, n_features + 1)
Only present when as_frame=True. Pandas DataFrame with data and target. New in version 0.24.
(data, target)tuple if return_X_y is True
data and target would be of the format defined in the Bunch description above. New in version 0.20.
Examples using sklearn.datasets.fetch_20newsgroups_vectorized
Model Complexity Influence
Multiclass sparse logistic regression on 20newgroups
The Johnson-Lindenstrauss bound for embedding with random projections | sklearn.modules.generated.sklearn.datasets.fetch_20newsgroups_vectorized |
sklearn.datasets.fetch_california_housing
sklearn.datasets.fetch_california_housing(*, data_home=None, download_if_missing=True, return_X_y=False, as_frame=False) [source]
Load the California housing dataset (regression).
Samples total 20640
Dimensionality 8
Features real
Target real 0.15 - 5. Read more in the User Guide. Parameters
data_homestr, default=None
Specify another download and cache folder for the datasets. By default all scikit-learn data is stored in ‘~/scikit_learn_data’ subfolders.
download_if_missingbool, default=True
If False, raise an IOError if the data is not locally available instead of trying to download the data from the source site.
return_X_ybool, default=False.
If True, returns (data.data, data.target) instead of a Bunch object. New in version 0.20.
as_framebool, default=False
If True, the data is a pandas DataFrame including columns with appropriate dtypes (numeric, string or categorical). The target is a pandas DataFrame or Series depending on the number of target_columns. New in version 0.23. Returns
datasetBunch
Dictionary-like object, with the following attributes.
datandarray, shape (20640, 8)
Each row corresponding to the 8 feature values in order. If as_frame is True, data is a pandas object.
targetnumpy array of shape (20640,)
Each value corresponds to the average house value in units of 100,000. If as_frame is True, target is a pandas object.
feature_nameslist of length 8
Array of ordered feature names used in the dataset.
DESCRstring
Description of the California housing dataset.
framepandas DataFrame
Only present when as_frame=True. DataFrame with data and target. New in version 0.23.
(data, target)tuple if return_X_y is True
New in version 0.20. Notes This dataset consists of 20,640 samples and 8 features.
Examples using sklearn.datasets.fetch_california_housing
Release Highlights for scikit-learn 0.24
Partial Dependence and Individual Conditional Expectation Plots
Imputing missing values with variants of IterativeImputer
Imputing missing values before building an estimator
Compare the effect of different scalers on data with outliers | sklearn.modules.generated.sklearn.datasets.fetch_california_housing |
sklearn.datasets.fetch_covtype
sklearn.datasets.fetch_covtype(*, data_home=None, download_if_missing=True, random_state=None, shuffle=False, return_X_y=False, as_frame=False) [source]
Load the covertype dataset (classification). Download it if necessary.
Classes 7
Samples total 581012
Dimensionality 54
Features int Read more in the User Guide. Parameters
data_homestr, default=None
Specify another download and cache folder for the datasets. By default all scikit-learn data is stored in ‘~/scikit_learn_data’ subfolders.
download_if_missingbool, default=True
If False, raise an IOError if the data is not locally available instead of trying to download the data from the source site.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset shuffling. Pass an int for reproducible output across multiple function calls. See Glossary.
shufflebool, default=False
Whether to shuffle dataset.
return_X_ybool, default=False
If True, returns (data.data, data.target) instead of a Bunch object. New in version 0.20.
as_framebool, default=False
If True, the data is a pandas DataFrame including columns with appropriate dtypes (numeric). The target is a pandas DataFrame or Series depending on the number of target columns. If return_X_y is True, then (data, target) will be pandas DataFrames or Series as described below. New in version 0.24. Returns
datasetBunch
Dictionary-like object, with the following attributes.
datandarray of shape (581012, 54)
Each row corresponds to the 54 features in the dataset.
targetndarray of shape (581012,)
Each value corresponds to one of the 7 forest covertypes with values ranging between 1 to 7.
framedataframe of shape (581012, 55)
Only present when as_frame=True. Contains data and target.
DESCRstr
Description of the forest covertype dataset.
feature_nameslist
The names of the dataset columns. target_names: list
The names of the target columns.
(data, target)tuple if return_X_y is True
New in version 0.20.
Examples using sklearn.datasets.fetch_covtype
Release Highlights for scikit-learn 0.24
Scalable learning with polynomial kernel approximation | sklearn.modules.generated.sklearn.datasets.fetch_covtype
sklearn.datasets.fetch_kddcup99
sklearn.datasets.fetch_kddcup99(*, subset=None, data_home=None, shuffle=False, random_state=None, percent10=True, download_if_missing=True, return_X_y=False, as_frame=False) [source]
Load the kddcup99 dataset (classification). Download it if necessary.
Classes 23
Samples total 4898431
Dimensionality 41
Features discrete (int) or continuous (float) Read more in the User Guide. New in version 0.18. Parameters
subset{‘SA’, ‘SF’, ‘http’, ‘smtp’}, default=None
To return the corresponding classical subsets of kddcup 99. If None, return the entire kddcup 99 dataset.
data_homestr, default=None
Specify another download and cache folder for the datasets. By default all scikit-learn data is stored in ‘~/scikit_learn_data’ subfolders. .. versionadded:: 0.19
shufflebool, default=False
Whether to shuffle dataset.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset shuffling and for selection of abnormal samples if subset='SA'. Pass an int for reproducible output across multiple function calls. See Glossary.
percent10bool, default=True
Whether to load only 10 percent of the data.
download_if_missingbool, default=True
If False, raise an IOError if the data is not locally available instead of trying to download the data from the source site.
return_X_ybool, default=False
If True, returns (data, target) instead of a Bunch object. See below for more information about the data and target object. New in version 0.20.
as_framebool, default=False
If True, returns a pandas Dataframe for the data and target objects in the Bunch returned object; Bunch return object will also have a frame member. New in version 0.24. Returns
dataBunch
Dictionary-like object, with the following attributes.
data{ndarray, dataframe} of shape (494021, 41)
The data matrix to learn. If as_frame=True, data will be a pandas DataFrame.
target{ndarray, series} of shape (494021,)
The regression target for each sample. If as_frame=True, target will be a pandas Series.
framedataframe of shape (494021, 42)
Only present when as_frame=True. Contains data and target.
DESCRstr
The full description of the dataset.
feature_nameslist
The names of the dataset columns. target_names: list
The names of the target columns.
(data, target)tuple if return_X_y is True
New in version 0.20. | sklearn.modules.generated.sklearn.datasets.fetch_kddcup99 |
sklearn.datasets.fetch_lfw_pairs
sklearn.datasets.fetch_lfw_pairs(*, subset='train', data_home=None, funneled=True, resize=0.5, color=False, slice_=(slice(70, 195, None), slice(78, 172, None)), download_if_missing=True) [source]
Load the Labeled Faces in the Wild (LFW) pairs dataset (classification). Download it if necessary.
Classes 2
Samples total 13233
Dimensionality 5828
Features real, between 0 and 255 In the official README.txt this task is described as the “Restricted” task. The “Unrestricted” variant is not implemented and is currently unsupported. The original images are 250 x 250 pixels, but the default slice and resize arguments reduce them to 62 x 47. Read more in the User Guide. Parameters
subset{‘train’, ‘test’, ‘10_folds’}, default=’train’
Select the dataset to load: ‘train’ for the development training set, ‘test’ for the development test set, and ‘10_folds’ for the official evaluation set that is meant to be used with a 10-folds cross validation.
data_homestr, default=None
Specify another download and cache folder for the datasets. By default all scikit-learn data is stored in ‘~/scikit_learn_data’ subfolders.
funneledbool, default=True
Download and use the funneled variant of the dataset.
resizefloat, default=0.5
Ratio used to resize each face picture.
colorbool, default=False
Keep the 3 RGB channels instead of averaging them to a single gray level channel. If color is True the shape of the data has one more dimension than the shape with color = False.
slice_tuple of slice, default=(slice(70, 195), slice(78, 172))
Provide a custom 2D slice (height, width) to extract the ‘interesting’ part of the jpeg files and avoid learning statistical correlations from the background.
download_if_missingbool, default=True
If False, raise an IOError if the data is not locally available instead of trying to download the data from the source site. Returns
dataBunch
Dictionary-like object, with the following attributes.
datandarray of shape (2200, 5828). Shape depends on subset.
Each row corresponds to 2 ravel’d face images of original size 62 x 47 pixels. Changing the slice_, resize or subset parameters will change the shape of the output.
pairsndarray of shape (2200, 2, 62, 47). Shape depends on subset
Each row has 2 face images corresponding to same or different person from the dataset containing 5749 people. Changing the slice_, resize or subset parameters will change the shape of the output.
targetnumpy array of shape (2200,). Shape depends on subset.
Labels associated with each pair of images: the two label values correspond to pairs of different persons or of the same person.
DESCRstring
Description of the Labeled Faces in the Wild (LFW) dataset. | sklearn.modules.generated.sklearn.datasets.fetch_lfw_pairs |
sklearn.datasets.fetch_lfw_people
sklearn.datasets.fetch_lfw_people(*, data_home=None, funneled=True, resize=0.5, min_faces_per_person=0, color=False, slice_=(slice(70, 195, None), slice(78, 172, None)), download_if_missing=True, return_X_y=False) [source]
Load the Labeled Faces in the Wild (LFW) people dataset (classification). Download it if necessary.
Classes 5749
Samples total 13233
Dimensionality 5828
Features real, between 0 and 255 Read more in the User Guide. Parameters
data_homestr, default=None
Specify another download and cache folder for the datasets. By default all scikit-learn data is stored in ‘~/scikit_learn_data’ subfolders.
funneledbool, default=True
Download and use the funneled variant of the dataset.
resizefloat, default=0.5
Ratio used to resize each face picture.
min_faces_per_personint, default=0
The extracted dataset will only retain pictures of people that have at least min_faces_per_person different pictures.
colorbool, default=False
Keep the 3 RGB channels instead of averaging them to a single gray level channel. If color is True the shape of the data has one more dimension than the shape with color = False.
slice_tuple of slice, default=(slice(70, 195), slice(78, 172))
Provide a custom 2D slice (height, width) to extract the ‘interesting’ part of the jpeg files and avoid learning statistical correlations from the background.
download_if_missingbool, default=True
If False, raise an IOError if the data is not locally available instead of trying to download the data from the source site.
return_X_ybool, default=False
If True, returns (dataset.data, dataset.target) instead of a Bunch object. See below for more information about the dataset.data and dataset.target object. New in version 0.20. Returns
datasetBunch
Dictionary-like object, with the following attributes.
datanumpy array of shape (13233, 2914)
Each row corresponds to a ravelled face image of original size 62 x 47 pixels. Changing the slice_ or resize parameters will change the shape of the output.
imagesnumpy array of shape (13233, 62, 47)
Each row is a face image corresponding to one of the 5749 people in the dataset. Changing the slice_ or resize parameters will change the shape of the output.
targetnumpy array of shape (13233,)
Labels associated with each face image. Those labels range from 0-5748 and correspond to the person IDs.
DESCRstring
Description of the Labeled Faces in the Wild (LFW) dataset.
(data, target)tuple if return_X_y is True
New in version 0.20.
Examples using sklearn.datasets.fetch_lfw_people
Faces recognition example using eigenfaces and SVMs | sklearn.modules.generated.sklearn.datasets.fetch_lfw_people |
sklearn.datasets.fetch_olivetti_faces
sklearn.datasets.fetch_olivetti_faces(*, data_home=None, shuffle=False, random_state=0, download_if_missing=True, return_X_y=False) [source]
Load the Olivetti faces data-set from AT&T (classification). Download it if necessary.
Classes 40
Samples total 400
Dimensionality 4096
Features real, between 0 and 1 Read more in the User Guide. Parameters
data_homestr, default=None
Specify another download and cache folder for the datasets. By default all scikit-learn data is stored in ‘~/scikit_learn_data’ subfolders.
shufflebool, default=False
If True the order of the dataset is shuffled to avoid having images of the same person grouped.
random_stateint, RandomState instance or None, default=0
Determines random number generation for dataset shuffling. Pass an int for reproducible output across multiple function calls. See Glossary.
download_if_missingbool, default=True
If False, raise an IOError if the data is not locally available instead of trying to download the data from the source site.
return_X_ybool, default=False
If True, returns (data, target) instead of a Bunch object. See below for more information about the data and target object. New in version 0.22. Returns
dataBunch
Dictionary-like object, with the following attributes. data: ndarray, shape (400, 4096)
Each row corresponds to a ravelled face image of original size 64 x 64 pixels.
imagesndarray, shape (400, 64, 64)
Each row is a face image corresponding to one of the 40 subjects of the dataset.
targetndarray, shape (400,)
Labels associated to each face image. Those labels range from 0-39 and correspond to the Subject IDs.
DESCRstr
Description of the modified Olivetti Faces Dataset.
(data, target)tuple if return_X_y=True
New in version 0.22.
Examples using sklearn.datasets.fetch_olivetti_faces
Online learning of a dictionary of parts of faces
Faces dataset decompositions
Pixel importances with a parallel forest of trees
Face completion with a multi-output estimators | sklearn.modules.generated.sklearn.datasets.fetch_olivetti_faces |
sklearn.datasets.fetch_openml
sklearn.datasets.fetch_openml(name: Optional[str] = None, *, version: Union[str, int] = 'active', data_id: Optional[int] = None, data_home: Optional[str] = None, target_column: Optional[Union[str, List]] = 'default-target', cache: bool = True, return_X_y: bool = False, as_frame: Union[str, bool] = 'auto') [source]
Fetch dataset from openml by name or dataset id. Datasets are uniquely identified by either an integer ID or by a combination of name and version (i.e. there might be multiple versions of the ‘iris’ dataset). Please give either name or data_id (not both). In case a name is given, a version can also be provided. Read more in the User Guide. New in version 0.20. Note EXPERIMENTAL The API is experimental (particularly the return value structure), and might have small backward-incompatible changes without notice or warning in future releases. Parameters
namestr, default=None
String identifier of the dataset. Note that OpenML can have multiple datasets with the same name.
versionint or ‘active’, default=’active’
Version of the dataset. Can only be provided if also name is given. If ‘active’ the oldest version that’s still active is used. Since there may be more than one active version of a dataset, and those versions may fundamentally be different from one another, setting an exact version is highly recommended.
data_idint, default=None
OpenML ID of the dataset. The most specific way of retrieving a dataset. If data_id is not given, name (and potential version) are used to obtain a dataset.
data_homestr, default=None
Specify another download and cache folder for the data sets. By default all scikit-learn data is stored in ‘~/scikit_learn_data’ subfolders.
target_columnstr, list or None, default=’default-target’
Specify the column name in the data to use as target. If ‘default-target’, the standard target column as stored on the server is used. If None, all columns are returned as data and the target is None. If list (of strings), all columns with these names are returned as multi-target (Note: not all scikit-learn classifiers can handle all types of multi-output combinations)
cachebool, default=True
Whether to cache downloaded datasets using joblib.
return_X_ybool, default=False
If True, returns (data, target) instead of a Bunch object. See below for more information about the data and target objects.
as_framebool or ‘auto’, default=’auto’
If True, the data is a pandas DataFrame including columns with appropriate dtypes (numeric, string or categorical). The target is a pandas DataFrame or Series depending on the number of target_columns. The Bunch will contain a frame attribute with the target and the data. If return_X_y is True, then (data, target) will be pandas DataFrames or Series as described above. If as_frame is ‘auto’, the data and target will be converted to DataFrame or Series as if as_frame is set to True, unless the dataset is stored in sparse format. Changed in version 0.24: The default value of as_frame changed from False to 'auto' in 0.24. Returns
dataBunch
Dictionary-like object, with the following attributes.
datanp.array, scipy.sparse.csr_matrix of floats, or pandas DataFrame
The feature matrix. Categorical features are encoded as ordinals.
targetnp.array, pandas Series or DataFrame
The regression target or classification labels, if applicable. Dtype is float if numeric, and object if categorical. If as_frame is True, target is a pandas object.
DESCRstr
The full description of the dataset
feature_nameslist
The names of the dataset columns target_names: list
The names of the target columns New in version 0.22.
categoriesdict or None
Maps each categorical feature name to a list of values, such that the value encoded as i is ith in the list. If as_frame is True, this is None.
detailsdict
More metadata from OpenML
framepandas DataFrame
Only present when as_frame=True. DataFrame with data and target.
(data, target)tuple if return_X_y is True
Note EXPERIMENTAL This interface is experimental and subsequent releases may change attributes without notice (although there should only be minor changes to data and target). Missing values in the ‘data’ are represented as NaN’s. Missing values in ‘target’ are represented as NaN’s (numerical target) or None (categorical target)
Examples using sklearn.datasets.fetch_openml
Release Highlights for scikit-learn 0.22
Categorical Feature Support in Gradient Boosting
Combine predictors using stacking
Gaussian process regression (GPR) on Mauna Loa CO2 data.
MNIST classification using multinomial logistic + L1
Early stopping of Stochastic Gradient Descent
Poisson regression and non-normal loss
Tweedie regression on insurance claims
Permutation Importance vs Random Forest Feature Importance (MDI)
Common pitfalls in interpretation of coefficients of linear models
Visualizations with Display Objects
Classifier Chain
Approximate nearest neighbors in TSNE
Visualization of MLP weights on MNIST
Column Transformer with Mixed Types
Effect of transforming the targets in regression model | sklearn.modules.generated.sklearn.datasets.fetch_openml |
sklearn.datasets.fetch_rcv1
sklearn.datasets.fetch_rcv1(*, data_home=None, subset='all', download_if_missing=True, random_state=None, shuffle=False, return_X_y=False) [source]
Load the RCV1 multilabel dataset (classification). Download it if necessary. Version: RCV1-v2, vectors, full sets, topics multilabels.
Classes 103
Samples total 804414
Dimensionality 47236
Features real, between 0 and 1 Read more in the User Guide. New in version 0.17. Parameters
data_homestr, default=None
Specify another download and cache folder for the datasets. By default all scikit-learn data is stored in ‘~/scikit_learn_data’ subfolders.
subset{‘train’, ‘test’, ‘all’}, default=’all’
Select the dataset to load: ‘train’ for the training set (23149 samples), ‘test’ for the test set (781265 samples), ‘all’ for both, with the training samples first if shuffle is False. This follows the official LYRL2004 chronological split.
download_if_missingbool, default=True
If False, raise an IOError if the data is not locally available instead of trying to download the data from the source site.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset shuffling. Pass an int for reproducible output across multiple function calls. See Glossary.
shufflebool, default=False
Whether to shuffle dataset.
return_X_ybool, default=False
If True, returns (dataset.data, dataset.target) instead of a Bunch object. See below for more information about the dataset.data and dataset.target object. New in version 0.20. Returns
datasetBunch
Dictionary-like object, with the following attributes.
datasparse matrix of shape (804414, 47236), dtype=np.float64
The array has 0.16% of non zero values. Will be of CSR format.
targetsparse matrix of shape (804414, 103), dtype=np.uint8
Each sample has a value of 1 in its categories, and 0 in others. The array has 3.15% of non zero values. Will be of CSR format.
sample_idndarray of shape (804414,), dtype=np.uint32,
Identification number of each sample, as ordered in dataset.data.
target_namesndarray of shape (103,), dtype=object
Names of each target (RCV1 topics), as ordered in dataset.target.
DESCRstr
Description of the RCV1 dataset.
(data, target)tuple if return_X_y is True
New in version 0.20. | sklearn.modules.generated.sklearn.datasets.fetch_rcv1 |
sklearn.datasets.fetch_species_distributions
sklearn.datasets.fetch_species_distributions(*, data_home=None, download_if_missing=True) [source]
Loader for the species distribution dataset from Phillips et al. (2006). Read more in the User Guide. Parameters
data_homestr, default=None
Specify another download and cache folder for the datasets. By default all scikit-learn data is stored in ‘~/scikit_learn_data’ subfolders.
download_if_missingbool, default=True
If False, raise an IOError if the data is not locally available instead of trying to download the data from the source site. Returns
dataBunch
Dictionary-like object, with the following attributes.
coveragesarray, shape = [14, 1592, 1212]
These represent the 14 features measured at each point of the map grid. The latitude/longitude values for the grid are discussed below. Missing data is represented by the value -9999.
trainrecord array, shape = (1624,)
The training points for the data. Each point has three fields:
train[‘species’] is the species name
train[‘dd long’] is the longitude, in degrees
train[‘dd lat’] is the latitude, in degrees
testrecord array, shape = (620,)
The test points for the data. Same format as the training data.
Nx, Nyintegers
The number of longitudes (x) and latitudes (y) in the grid
x_left_lower_corner, y_left_lower_cornerfloats
The (x,y) position of the lower-left corner, in degrees
grid_sizefloat
The spacing between points of the grid, in degrees Notes This dataset represents the geographic distribution of species. The dataset is provided by Phillips et al. (2006). The two species are:
“Bradypus variegatus”, the Brown-throated Sloth.
“Microryzomys minutus”, also known as the Forest Small Rice Rat, a rodent that lives in Colombia, Ecuador, Peru, and Venezuela. For an example of using this dataset with scikit-learn, see examples/applications/plot_species_distribution_modeling.py. References
“Maximum entropy modeling of species geographic distributions” S. J. Phillips, R. P. Anderson, R. E. Schapire - Ecological Modelling, 190:231-259, 2006.
Examples using sklearn.datasets.fetch_species_distributions
Species distribution modeling
Kernel Density Estimate of Species Distributions | sklearn.modules.generated.sklearn.datasets.fetch_species_distributions |
sklearn.datasets.get_data_home
sklearn.datasets.get_data_home(data_home=None) → str[source]
Return the path of the scikit-learn data dir. This folder is used by some large dataset loaders to avoid downloading the data several times. By default the data dir is set to a folder named ‘scikit_learn_data’ in the user home folder. Alternatively, it can be set by the ‘SCIKIT_LEARN_DATA’ environment variable or programmatically by giving an explicit folder path. The ‘~’ symbol is expanded to the user home folder. If the folder does not already exist, it is automatically created. Parameters
data_homestr, default=None
The path to the scikit-learn data directory. If None, the default path is ~/scikit_learn_data.
Examples using sklearn.datasets.get_data_home
Out-of-core classification of text documents | sklearn.modules.generated.sklearn.datasets.get_data_home |
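The behaviour described above can be checked with a short sketch (it assumes scikit-learn is installed; the custom cache path below is illustrative only):

```python
import os
import tempfile
from sklearn.datasets import get_data_home

# With no argument, the default cache directory is resolved and created
# if it does not already exist.
default_path = get_data_home()
assert os.path.isdir(default_path)

# An explicit folder can also be passed; it is created on demand.
with tempfile.TemporaryDirectory() as tmp:
    custom = get_data_home(data_home=os.path.join(tmp, "cache"))
    assert os.path.isdir(custom)
```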
sklearn.datasets.load_boston
sklearn.datasets.load_boston(*, return_X_y=False) [source]
Load and return the boston house-prices dataset (regression).
Samples total 506
Dimensionality 13
Features real, positive
Targets real 5. - 50. Read more in the User Guide. Parameters
return_X_ybool, default=False
If True, returns (data, target) instead of a Bunch object. See below for more information about the data and target object. New in version 0.18. Returns
dataBunch
Dictionary-like object, with the following attributes.
datandarray of shape (506, 13)
The data matrix.
targetndarray of shape (506, )
The regression target.
filenamestr
The physical location of boston csv dataset. New in version 0.20.
DESCRstr
The full description of the dataset.
feature_namesndarray
The names of features
(data, target)tuple if return_X_y is True
New in version 0.18. Notes Changed in version 0.20: Fixed a wrong data point at [445, 0]. Examples >>> from sklearn.datasets import load_boston
>>> X, y = load_boston(return_X_y=True)
>>> print(X.shape)
(506, 13) | sklearn.modules.generated.sklearn.datasets.load_boston |
sklearn.datasets.load_breast_cancer
sklearn.datasets.load_breast_cancer(*, return_X_y=False, as_frame=False) [source]
Load and return the breast cancer wisconsin dataset (classification). The breast cancer dataset is a classic and very easy binary classification dataset.
Classes 2
Samples per class 212(M),357(B)
Samples total 569
Dimensionality 30
Features real, positive Read more in the User Guide. Parameters
return_X_ybool, default=False
If True, returns (data, target) instead of a Bunch object. See below for more information about the data and target object. New in version 0.18.
as_framebool, default=False
If True, the data is a pandas DataFrame including columns with appropriate dtypes (numeric). The target is a pandas DataFrame or Series depending on the number of target columns. If return_X_y is True, then (data, target) will be pandas DataFrames or Series as described below. New in version 0.23. Returns
dataBunch
Dictionary-like object, with the following attributes.
data{ndarray, dataframe} of shape (569, 30)
The data matrix. If as_frame=True, data will be a pandas DataFrame. target: {ndarray, Series} of shape (569,)
The classification target. If as_frame=True, target will be a pandas Series. feature_names: list
The names of the dataset columns. target_names: list
The names of target classes. frame: DataFrame of shape (569, 31)
Only present when as_frame=True. DataFrame with data and target. New in version 0.23. DESCR: str
The full description of the dataset. filename: str
The path to the location of the data. New in version 0.20.
(data, target)tuple if return_X_y is True
New in version 0.18. The copy of the UCI ML Breast Cancer Wisconsin (Diagnostic) dataset is downloaded from: https://goo.gl/U2Uwz2
Examples Let’s say you are interested in the samples 10, 50, and 85, and want to know their class name. >>> from sklearn.datasets import load_breast_cancer
>>> data = load_breast_cancer()
>>> data.target[[10, 50, 85]]
array([0, 1, 0])
>>> list(data.target_names)
['malignant', 'benign']
Examples using sklearn.datasets.load_breast_cancer
Post pruning decision trees with cost complexity pruning
Permutation Importance with Multicollinear or Correlated Features
Effect of varying threshold for self-training | sklearn.modules.generated.sklearn.datasets.load_breast_cancer |
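The return_X_y shortcut described above, together with the class balance from the table (212 malignant, 357 benign), can be verified with a short sketch:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer

# return_X_y=True yields the feature matrix and labels directly,
# without the surrounding Bunch object.
X, y = load_breast_cancer(return_X_y=True)
print(X.shape)  # (569, 30)
print(y.shape)  # (569,)

# Class 0 is 'malignant' (212 samples), class 1 is 'benign' (357 samples).
print(np.bincount(y))  # [212 357]
```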
sklearn.datasets.load_diabetes
sklearn.datasets.load_diabetes(*, return_X_y=False, as_frame=False) [source]
Load and return the diabetes dataset (regression).
Samples total 442
Dimensionality 10
Features real, -.2 < x < .2
Targets integer 25 - 346 Read more in the User Guide. Parameters
return_X_ybool, default=False.
If True, returns (data, target) instead of a Bunch object. See below for more information about the data and target object. New in version 0.18.
as_framebool, default=False
If True, the data is a pandas DataFrame including columns with appropriate dtypes (numeric). The target is a pandas DataFrame or Series depending on the number of target columns. If return_X_y is True, then (data, target) will be pandas DataFrames or Series as described below. New in version 0.23. Returns
dataBunch
Dictionary-like object, with the following attributes.
data{ndarray, dataframe} of shape (442, 10)
The data matrix. If as_frame=True, data will be a pandas DataFrame. target: {ndarray, Series} of shape (442,)
The regression target. If as_frame=True, target will be a pandas Series. feature_names: list
The names of the dataset columns. frame: DataFrame of shape (442, 11)
Only present when as_frame=True. DataFrame with data and target. New in version 0.23. DESCR: str
The full description of the dataset. data_filename: str
The path to the location of the data. target_filename: str
The path to the location of the target.
(data, target)tuple if return_X_y is True
New in version 0.18.
Examples using sklearn.datasets.load_diabetes
Plot individual and voting regression predictions
Gradient Boosting regression
Model Complexity Influence
Model-based and sequential feature selection
Lasso path using LARS
Linear Regression Example
Sparsity Example: Fitting only features 1 and 2
Lasso and Elastic Net
Lasso model selection: Cross-Validation / AIC / BIC
Advanced Plotting With Partial Dependence
Imputing missing values before building an estimator
Plotting Cross-Validated Predictions
Cross-validation on diabetes Dataset Exercise | sklearn.modules.generated.sklearn.datasets.load_diabetes |
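A minimal sketch confirming the shapes and the target range (25 - 346) stated in the table above:

```python
from sklearn.datasets import load_diabetes

# Feature matrix: 442 samples, 10 mean-centered and scaled features.
X, y = load_diabetes(return_X_y=True)
print(X.shape)  # (442, 10)

# The regression target spans the integer range 25 - 346.
print(y.min(), y.max())  # 25.0 346.0
```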
sklearn.datasets.load_digits
sklearn.datasets.load_digits(*, n_class=10, return_X_y=False, as_frame=False) [source]
Load and return the digits dataset (classification). Each datapoint is a 8x8 image of a digit.
Classes 10
Samples per class ~180
Samples total 1797
Dimensionality 64
Features integers 0-16 Read more in the User Guide. Parameters
n_classint, default=10
The number of classes to return. Between 0 and 10.
return_X_ybool, default=False
If True, returns (data, target) instead of a Bunch object. See below for more information about the data and target object. New in version 0.18.
as_framebool, default=False
If True, the data is a pandas DataFrame including columns with appropriate dtypes (numeric). The target is a pandas DataFrame or Series depending on the number of target columns. If return_X_y is True, then (data, target) will be pandas DataFrames or Series as described below. New in version 0.23. Returns
dataBunch
Dictionary-like object, with the following attributes.
data{ndarray, dataframe} of shape (1797, 64)
The flattened data matrix. If as_frame=True, data will be a pandas DataFrame. target: {ndarray, Series} of shape (1797,)
The classification target. If as_frame=True, target will be a pandas Series. feature_names: list
The names of the dataset columns. target_names: list
The names of target classes. New in version 0.20. frame: DataFrame of shape (1797, 65)
Only present when as_frame=True. DataFrame with data and target. New in version 0.23. images: {ndarray} of shape (1797, 8, 8)
The raw image data. DESCR: str
The full description of the dataset.
(data, target)tuple if return_X_y is True
New in version 0.18. This is a copy of the test set of the UCI ML hand-written digits dataset: https://archive.ics.uci.edu/ml/datasets/Optical+Recognition+of+Handwritten+Digits
Examples To load the data and visualize the images: >>> from sklearn.datasets import load_digits
>>> digits = load_digits()
>>> print(digits.data.shape)
(1797, 64)
>>> import matplotlib.pyplot as plt
>>> plt.gray()
>>> plt.matshow(digits.images[0])
>>> plt.show()
Examples using sklearn.datasets.load_digits
Recognizing hand-written digits
Feature agglomeration
Various Agglomerative Clustering on a 2D embedding of digits
A demo of K-Means clustering on the handwritten digits data
The Digit Dataset
Early stopping of Gradient Boosting
Recursive feature elimination
Comparing various online solvers
L1 Penalty and Sparsity in Logistic Regression
Manifold learning on handwritten digits: Locally Linear Embedding, Isomap…
The Johnson-Lindenstrauss bound for embedding with random projections
Explicit feature map approximation for RBF kernels
Plotting Validation Curves
Parameter estimation using grid search with cross-validation
Comparing randomized search and grid search for hyperparameter estimation
Balance model complexity and cross-validated score
Plotting Learning Curves
Kernel Density Estimation
Caching nearest neighbors
Dimensionality Reduction with Neighborhood Components Analysis
Restricted Boltzmann Machine features for digit classification
Compare Stochastic learning strategies for MLPClassifier
Pipelining: chaining a PCA and a logistic regression
Selecting dimensionality reduction with Pipeline and GridSearchCV
Label Propagation digits: Demonstrating performance
Label Propagation digits active learning
Digits Classification Exercise
Cross-validation on Digits Dataset Exercise | sklearn.modules.generated.sklearn.datasets.load_digits |
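The n_class parameter described above restricts the returned samples to the first n_class digits; a short sketch:

```python
from sklearn.datasets import load_digits

# With n_class=3, only samples for digits 0, 1 and 2 are returned.
digits = load_digits(n_class=3)
print(sorted(set(digits.target)))  # [0, 1, 2]

# Each flattened 64-element row in `data` corresponds to one
# 8x8 image in `images`.
print(digits.data.shape[1])     # 64
print(digits.images.shape[1:])  # (8, 8)
```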
sklearn.datasets.load_files
sklearn.datasets.load_files(container_path, *, description=None, categories=None, load_content=True, shuffle=True, encoding=None, decode_error='strict', random_state=0) [source]
Load text files with categories as subfolder names. Individual samples are assumed to be files stored in a two-level folder structure such as the following:

container_folder/
    category_1_folder/
        file_1.txt file_2.txt … file_42.txt
    category_2_folder/
        file_43.txt file_44.txt …

The folder names are used as supervised signal label names. The individual file names are not important. This function does not try to extract features into a numpy array or scipy sparse matrix. In addition, if load_content is false it does not try to load the files in memory. To use text files in a scikit-learn classification or clustering algorithm, you will need to use the :mod:`~sklearn.feature_extraction.text` module to build a feature extraction transformer that suits your problem. If you set load_content=True, you should also specify the encoding of the text using the ‘encoding’ parameter. For many modern text files, ‘utf-8’ will be the correct encoding. If you leave encoding equal to None, then the content will be made of bytes instead of Unicode, and you will not be able to use most functions in text. Similar feature extractors should be built for other kinds of unstructured data input such as images, audio, video, … Read more in the User Guide. Parameters
container_pathstr or unicode
Path to the main folder holding one subfolder per category
descriptionstr or unicode, default=None
A paragraph describing the characteristic of the dataset: its source, reference, etc.
categorieslist of str, default=None
If None (default), load all the categories. If not None, list of category names to load (other categories ignored).
load_contentbool, default=True
Whether or not to load the content of the different files. If True, a ‘data’ attribute containing the text information is present in the data structure returned. If not, a filenames attribute gives the path to the files.
shufflebool, default=True
Whether or not to shuffle the data: might be important for models that make the assumption that the samples are independent and identically distributed (i.i.d.), such as stochastic gradient descent.
encodingstr, default=None
If None, do not try to decode the content of the files (e.g. for images or other non-text content). If not None, encoding to use to decode text files to Unicode if load_content is True.
decode_error{‘strict’, ‘ignore’, ‘replace’}, default=’strict’
Instruction on what to do if a byte sequence is given to analyze that contains characters not of the given encoding. Passed as keyword argument ‘errors’ to bytes.decode.
random_stateint, RandomState instance or None, default=0
Determines random number generation for dataset shuffling. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
dataBunch
Dictionary-like object, with the following attributes.
datalist of str
Only present when load_content=True. The raw text data to learn.
targetndarray
The target labels (integer index).
target_nameslist
The names of target classes.
DESCRstr
The full description of the dataset. filenames: ndarray
The filenames holding the dataset. | sklearn.modules.generated.sklearn.datasets.load_files |
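The two-level folder layout described above can be exercised end to end with a small sketch; the category names ("pos"/"neg") and file contents below are hypothetical, built in a temporary directory:

```python
import os
import tempfile
from sklearn.datasets import load_files

# Build a minimal two-category container on the fly.
with tempfile.TemporaryDirectory() as container:
    for category, texts in [("pos", ["good", "great"]), ("neg", ["bad"])]:
        folder = os.path.join(container, category)
        os.makedirs(folder)
        for i, text in enumerate(texts):
            with open(os.path.join(folder, f"file_{i}.txt"), "w") as fh:
                fh.write(text)

    # encoding='utf-8' decodes the file contents to Unicode strings.
    bunch = load_files(container, encoding="utf-8", shuffle=False)

# Folder names become the class names; each file is one sample.
print(sorted(bunch.target_names))  # ['neg', 'pos']
print(len(bunch.data))             # 3
```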
sklearn.datasets.load_iris
sklearn.datasets.load_iris(*, return_X_y=False, as_frame=False) [source]
Load and return the iris dataset (classification). The iris dataset is a classic and very easy multi-class classification dataset.
Classes 3
Samples per class 50
Samples total 150
Dimensionality 4
Features real, positive Read more in the User Guide. Parameters
return_X_ybool, default=False
If True, returns (data, target) instead of a Bunch object. See below for more information about the data and target object. New in version 0.18.
as_framebool, default=False
If True, the data is a pandas DataFrame including columns with appropriate dtypes (numeric). The target is a pandas DataFrame or Series depending on the number of target columns. If return_X_y is True, then (data, target) will be pandas DataFrames or Series as described below. New in version 0.23. Returns
dataBunch
Dictionary-like object, with the following attributes.
data{ndarray, dataframe} of shape (150, 4)
The data matrix. If as_frame=True, data will be a pandas DataFrame. target: {ndarray, Series} of shape (150,)
The classification target. If as_frame=True, target will be a pandas Series. feature_names: list
The names of the dataset columns. target_names: list
The names of target classes. frame: DataFrame of shape (150, 5)
Only present when as_frame=True. DataFrame with data and target. New in version 0.23. DESCR: str
The full description of the dataset. filename: str
The path to the location of the data. New in version 0.20.
(data, target)tuple if return_X_y is True
New in version 0.18. Notes Changed in version 0.20: Fixed two wrong data points according to Fisher’s paper. The new version is the same as in R, but not as in the UCI Machine Learning Repository. Examples Let’s say you are interested in the samples 10, 25, and 50, and want to know their class name. >>> from sklearn.datasets import load_iris
>>> data = load_iris()
>>> data.target[[10, 25, 50]]
array([0, 0, 1])
>>> list(data.target_names)
['setosa', 'versicolor', 'virginica']
Examples using sklearn.datasets.load_iris
Release Highlights for scikit-learn 0.24
Release Highlights for scikit-learn 0.22
Plot classification probability
Plot Hierarchical Clustering Dendrogram
K-means Clustering
The Iris Dataset
Plot the decision surface of a decision tree on the iris dataset
Understanding the decision tree structure
PCA example with Iris Data-set
Incremental PCA
Comparison of LDA and PCA 2D projection of Iris dataset
Factor Analysis (with rotation) to visualize patterns
Plot the decision boundaries of a VotingClassifier
Early stopping of Gradient Boosting
Plot the decision surfaces of ensembles of trees on the iris dataset
Test with permutations the significance of a classification score
Univariate Feature Selection
GMM covariances
Gaussian process classification (GPC) on iris dataset
Regularization path of L1- Logistic Regression
Logistic Regression 3-class Classifier
Plot multi-class SGD on the iris dataset
Confusion matrix
Receiver Operating Characteristic (ROC) with cross validation
Nested versus non-nested cross-validation
Receiver Operating Characteristic (ROC)
Precision-Recall
Nearest Centroid Classification
Nearest Neighbors Classification
Comparing Nearest Neighbors with and without Neighborhood Components Analysis
Compare Stochastic learning strategies for MLPClassifier
Concatenating multiple feature extraction methods
Decision boundary of semi-supervised classifiers versus SVM on the Iris dataset
SVM with custom kernel
SVM-Anova: SVM with univariate feature selection
Plot different SVM classifiers in the iris dataset
RBF SVM parameters
SVM Exercise | sklearn.modules.generated.sklearn.datasets.load_iris |
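The shapes and class balance from the table above (3 classes, 50 samples per class, 4 features) can be confirmed with a short sketch:

```python
import numpy as np
from sklearn.datasets import load_iris

# return_X_y=True yields the feature matrix and labels directly.
X, y = load_iris(return_X_y=True)
print(X.shape)  # (150, 4)

# Three perfectly balanced classes of 50 samples each.
print(np.bincount(y))  # [50 50 50]
```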
sklearn.datasets.load_linnerud
sklearn.datasets.load_linnerud(*, return_X_y=False, as_frame=False) [source]
Load and return the physical exercise Linnerud dataset. This dataset is suitable for multi-output regression tasks.
Samples total 20
Dimensionality 3 (for both data and target)
Features integer
Targets integer Read more in the User Guide. Parameters
return_X_ybool, default=False
If True, returns (data, target) instead of a Bunch object. See below for more information about the data and target object. New in version 0.18.
as_framebool, default=False
If True, the data is a pandas DataFrame including columns with appropriate dtypes (numeric, string or categorical). The target is a pandas DataFrame or Series depending on the number of target columns. If return_X_y is True, then (data, target) will be pandas DataFrames or Series as described below. New in version 0.23. Returns
dataBunch
Dictionary-like object, with the following attributes.
data{ndarray, dataframe} of shape (20, 3)
The data matrix. If as_frame=True, data will be a pandas DataFrame. target: {ndarray, dataframe} of shape (20, 3)
The regression targets. If as_frame=True, target will be a pandas DataFrame. feature_names: list
The names of the dataset columns. target_names: list
The names of the target columns. frame: DataFrame of shape (20, 6)
Only present when as_frame=True. DataFrame with data and target. New in version 0.23. DESCR: str
The full description of the dataset. data_filename: str
The path to the location of the data. target_filename: str
The path to the location of the target. New in version 0.20.
(data, target)tuple if return_X_y is True
New in version 0.18. | sklearn.modules.generated.sklearn.datasets.load_linnerud |
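A minimal sketch of the multi-output shape described above, where both data and target have three columns:

```python
from sklearn.datasets import load_linnerud

# Both the data (exercise measurements) and the target (physiological
# measurements) have 20 rows and 3 columns.
X, y = load_linnerud(return_X_y=True)
print(X.shape, y.shape)  # (20, 3) (20, 3)

# The Bunch exposes the column names of both blocks.
bunch = load_linnerud()
print(bunch.feature_names)  # ['Chins', 'Situps', 'Jumps']
print(bunch.target_names)   # ['Weight', 'Waist', 'Pulse']
```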
sklearn.datasets.load_sample_image
sklearn.datasets.load_sample_image(image_name) [source]
Load the numpy array of a single sample image Read more in the User Guide. Parameters
image_name{china.jpg, flower.jpg}
The name of the sample image loaded Returns
img3D array
The image as a numpy array: height x width x color Examples >>> from sklearn.datasets import load_sample_image
>>> china = load_sample_image('china.jpg')
>>> china.dtype
dtype('uint8')
>>> china.shape
(427, 640, 3)
>>> flower = load_sample_image('flower.jpg')
>>> flower.dtype
dtype('uint8')
>>> flower.shape
(427, 640, 3)
Examples using sklearn.datasets.load_sample_image
Color Quantization using K-Means | sklearn.modules.generated.sklearn.datasets.load_sample_image |
sklearn.datasets.load_sample_images
sklearn.datasets.load_sample_images() [source]
Load sample images for image manipulation. Loads both china and flower. Read more in the User Guide. Returns
dataBunch
Dictionary-like object, with the following attributes.
imageslist of ndarray of shape (427, 640, 3)
The two sample images.
filenameslist
The filenames for the images.
DESCRstr
The full description of the dataset. Examples To load the data and visualize the images: >>> from sklearn.datasets import load_sample_images
>>> dataset = load_sample_images()
>>> len(dataset.images)
2
>>> first_img_data = dataset.images[0]
>>> first_img_data.shape
(427, 640, 3)
>>> first_img_data.dtype
dtype('uint8') | sklearn.modules.generated.sklearn.datasets.load_sample_images |
sklearn.datasets.load_svmlight_file
sklearn.datasets.load_svmlight_file(f, *, n_features=None, dtype=<class 'numpy.float64'>, multilabel=False, zero_based='auto', query_id=False, offset=0, length=-1) [source]
Load datasets in the svmlight / libsvm format into a sparse CSR matrix. This format is a text-based format, with one sample per line. It does not store zero valued features, hence it is suitable for sparse datasets. The first element of each line can be used to store a target variable to predict. This format is used as the default format for both the svmlight and the libsvm command line programs. Parsing a text-based source can be expensive. When working repeatedly on the same dataset, it is recommended to wrap this loader with joblib.Memory.cache to store a memmapped backup of the CSR results of the first call and benefit from the near instantaneous loading of memmapped structures for the subsequent calls. In case the file contains a pairwise preference constraint (known as “qid” in the svmlight format) these are ignored unless the query_id parameter is set to True. These pairwise preference constraints can be used to constrain the combination of samples when using pairwise loss functions (as is the case in some learning to rank problems) so that only pairs with the same query_id value are considered. This implementation is written in Cython and is reasonably fast. However, a faster API-compatible loader is also available at: https://github.com/mblondel/svmlight-loader Parameters
fstr, file-like or int
(Path to) a file to load. If a path ends in “.gz” or “.bz2”, it will be uncompressed on the fly. If an integer is passed, it is assumed to be a file descriptor. A file-like or file descriptor will not be closed by this function. A file-like object must be opened in binary mode.
n_featuresint, default=None
The number of features to use. If None, it will be inferred. This argument is useful to load several files that are subsets of a bigger sliced dataset: each subset might not have examples of every feature, hence the inferred shape might vary from one slice to another. n_features is only required if offset or length are passed a non-default value.
dtypenumpy data type, default=np.float64
Data type of dataset to be loaded. This will be the data type of the output numpy arrays X and y.
multilabelbool, default=False
Samples may have several labels each (see https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multilabel.html)
zero_basedbool or “auto”, default=”auto”
Whether column indices in f are zero-based (True) or one-based (False). If column indices are one-based, they are transformed to zero-based to match Python/NumPy conventions. If set to “auto”, a heuristic check is applied to determine this from the file contents. Both kinds of files occur “in the wild”, but they are unfortunately not self-identifying. Using “auto” or True should always be safe when no offset or length is passed. If offset or length are passed, the “auto” mode falls back to zero_based=True to avoid having the heuristic check yield inconsistent results on different segments of the file.
query_idbool, default=False
If True, will return the query_id array for each file.
offsetint, default=0
Ignore the offset first bytes by seeking forward, then discarding the following bytes up until the next new line character.
lengthint, default=-1
If strictly positive, stop reading any new line of data once the position in the file has reached the (offset + length) bytes threshold. Returns
Xscipy.sparse matrix of shape (n_samples, n_features)
yndarray of shape (n_samples,), or, in the multilabel case, a list of tuples of length n_samples.
query_idarray of shape (n_samples,)
query_id for each sample. Only returned when query_id is set to True. See also
load_svmlight_files
Similar function for loading multiple files in this format, enforcing the same number of features/columns on all of them. Examples To use joblib.Memory to cache the svmlight file: from joblib import Memory
from sklearn.datasets import load_svmlight_file
mem = Memory("./mycache")
@mem.cache
def get_data():
data = load_svmlight_file("mysvmlightfile")
return data[0], data[1]
X, y = get_data() | sklearn.modules.generated.sklearn.datasets.load_svmlight_file |
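As a further minimal sketch, the loader also accepts an in-memory binary file-like object; the tiny two-sample dataset below is made up purely for illustration:

```python
from io import BytesIO

from sklearn.datasets import load_svmlight_file

# Two samples in svmlight format: "<target> <index>:<value> ...".
# Indices here are one-based, so zero_based=False maps 1..3 to columns 0..2.
content = b"1 1:0.5 3:1.5\n-1 2:2.0 3:0.25\n"
X, y = load_svmlight_file(BytesIO(content), zero_based=False)

print(X.shape)  # (2, 3); X is a sparse CSR matrix
```

Passing `zero_based=False` explicitly avoids the heuristic of the `"auto"` mode on such a small file.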
sklearn.datasets.load_svmlight_files
sklearn.datasets.load_svmlight_files(files, *, n_features=None, dtype=<class 'numpy.float64'>, multilabel=False, zero_based='auto', query_id=False, offset=0, length=-1) [source]
Load datasets from multiple files in SVMlight format. This function is equivalent to mapping load_svmlight_file over a list of files, except that the results are concatenated into a single, flat list and the sample vectors are constrained to all have the same number of features. In case the file contains a pairwise preference constraint (known as “qid” in the svmlight format) these are ignored unless the query_id parameter is set to True. These pairwise preference constraints can be used to constrain the combination of samples when using pairwise loss functions (as is the case in some learning to rank problems) so that only pairs with the same query_id value are considered. Parameters
filesarray-like, dtype=str, file-like or int
(Paths of) files to load. If a path ends in “.gz” or “.bz2”, it will be uncompressed on the fly. If an integer is passed, it is assumed to be a file descriptor. File-likes and file descriptors will not be closed by this function. File-like objects must be opened in binary mode.
n_featuresint, default=None
The number of features to use. If None, it will be inferred from the maximum column index occurring in any of the files. This can be set to a higher value than the actual number of features in any of the input files, but setting it to a lower value will cause an exception to be raised.
dtypenumpy data type, default=np.float64
Data type of dataset to be loaded. This will be the data type of the output numpy arrays X and y.
multilabelbool, default=False
Samples may have several labels each (see https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multilabel.html)
zero_basedbool or “auto”, default=”auto”
Whether column indices in f are zero-based (True) or one-based (False). If column indices are one-based, they are transformed to zero-based to match Python/NumPy conventions. If set to “auto”, a heuristic check is applied to determine this from the file contents. Both kinds of files occur “in the wild”, but they are unfortunately not self-identifying. Using “auto” or True should always be safe when no offset or length is passed. If offset or length are passed, the “auto” mode falls back to zero_based=True to avoid having the heuristic check yield inconsistent results on different segments of the file.
query_idbool, default=False
If True, will return the query_id array for each file.
offsetint, default=0
Ignore the offset first bytes by seeking forward, then discarding the following bytes up until the next new line character.
lengthint, default=-1
If strictly positive, stop reading any new line of data once the position in the file has reached the (offset + length) bytes threshold. Returns
[X1, y1, …, Xn, yn]
where each (Xi, yi) pair is the result from load_svmlight_file(files[i]).
If query_id is set to True, this will instead return [X1, y1, q1, …, Xn, yn, qn], where each (Xi, yi, qi) triple is the result from load_svmlight_file(files[i]).
See also
load_svmlight_file
Notes When fitting a model to a matrix X_train and evaluating it against a matrix X_test, it is essential that X_train and X_test have the same number of features (X_train.shape[1] == X_test.shape[1]). This may not be the case if you load the files individually with load_svmlight_file. | sklearn.modules.generated.sklearn.datasets.load_svmlight_files |
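To illustrate the point in the Notes, here is a minimal sketch with two made-up in-memory files standing in for a train/test split; loading them in one call enforces a common number of columns:

```python
from io import BytesIO

from sklearn.datasets import load_svmlight_files

# Hypothetical train/test data; values chosen only for illustration.
train = BytesIO(b"0 1:1.0 4:2.0\n1 2:0.5\n")
test = BytesIO(b"1 1:1.5\n")

# A single call infers n_features from the maximum index over both files.
X_train, y_train, X_test, y_test = load_svmlight_files(
    [train, test], zero_based=False)

print(X_train.shape[1] == X_test.shape[1])  # True
```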
sklearn.datasets.load_wine
sklearn.datasets.load_wine(*, return_X_y=False, as_frame=False) [source]
Load and return the wine dataset (classification). New in version 0.18. The wine dataset is a classic and very easy multi-class classification dataset.
Classes 3
Samples per class [59,71,48]
Samples total 178
Dimensionality 13
Features real, positive Read more in the User Guide. Parameters
return_X_ybool, default=False
If True, returns (data, target) instead of a Bunch object. See below for more information about the data and target object.
as_framebool, default=False
If True, the data is a pandas DataFrame including columns with appropriate dtypes (numeric). The target is a pandas DataFrame or Series depending on the number of target columns. If return_X_y is True, then (data, target) will be pandas DataFrames or Series as described below. New in version 0.23. Returns
dataBunch
Dictionary-like object, with the following attributes.
data{ndarray, dataframe} of shape (178, 13)
The data matrix. If as_frame=True, data will be a pandas DataFrame.
target{ndarray, Series} of shape (178,)
The classification target. If as_frame=True, target will be a pandas Series.
feature_nameslist
The names of the dataset columns.
target_nameslist
The names of target classes.
frameDataFrame of shape (178, 14)
Only present when as_frame=True. DataFrame with data and target. New in version 0.23.
DESCRstr
The full description of the dataset.
(data, target)tuple if return_X_y is True
The copy of the UCI ML Wine Data Set is downloaded and modified to fit the
standard format from:
https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data
Examples Let’s say you are interested in the samples 10, 80, and 140, and want to know their class name. >>> from sklearn.datasets import load_wine
>>> data = load_wine()
>>> data.target[[10, 80, 140]]
array([0, 1, 2])
>>> list(data.target_names)
['class_0', 'class_1', 'class_2']
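Setting return_X_y=True skips the Bunch wrapper and yields the (data, target) pair directly; a short sketch:

```python
from sklearn.datasets import load_wine

# return_X_y=True gives the (data, target) tuple instead of a Bunch.
X, y = load_wine(return_X_y=True)

print(X.shape)  # (178, 13)
print(y.shape)  # (178,)
```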
Examples using sklearn.datasets.load_wine
Outlier detection on a real data set
ROC Curve with Visualization API
Importance of Feature Scaling | sklearn.modules.generated.sklearn.datasets.load_wine |
sklearn.datasets.make_biclusters
sklearn.datasets.make_biclusters(shape, n_clusters, *, noise=0.0, minval=10, maxval=100, shuffle=True, random_state=None) [source]
Generate an array with constant block diagonal structure for biclustering. Read more in the User Guide. Parameters
shapeiterable of shape (n_rows, n_cols)
The shape of the result.
n_clustersint
The number of biclusters.
noisefloat, default=0.0
The standard deviation of the gaussian noise.
minvalint, default=10
Minimum value of a bicluster.
maxvalint, default=100
Maximum value of a bicluster.
shufflebool, default=True
Shuffle the samples.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape shape
The generated array.
rowsndarray of shape (n_clusters, X.shape[0])
The indicators for cluster membership of each row.
colsndarray of shape (n_clusters, X.shape[1])
The indicators for cluster membership of each column. See also
make_checkerboard
References
1
Dhillon, I. S. (2001, August). Co-clustering documents and words using bipartite spectral graph partitioning. In Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining (pp. 269-274). ACM.
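A minimal usage sketch (the shape and cluster count below are chosen only for illustration):

```python
from sklearn.datasets import make_biclusters

# Generate a block-diagonal array with 3 biclusters plus mild noise.
X, rows, cols = make_biclusters(
    shape=(20, 15), n_clusters=3, noise=0.05, random_state=0)

print(X.shape)     # (20, 15)
print(rows.shape)  # (3, 20): one row-membership mask per bicluster
print(cols.shape)  # (3, 15): one column-membership mask per bicluster
```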
Examples using sklearn.datasets.make_biclusters
A demo of the Spectral Co-Clustering algorithm | sklearn.modules.generated.sklearn.datasets.make_biclusters |
sklearn.datasets.make_blobs
sklearn.datasets.make_blobs(n_samples=100, n_features=2, *, centers=None, cluster_std=1.0, center_box=(-10.0, 10.0), shuffle=True, random_state=None, return_centers=False) [source]
Generate isotropic Gaussian blobs for clustering. Read more in the User Guide. Parameters
n_samplesint or array-like, default=100
If int, it is the total number of points equally divided among clusters. If array-like, each element of the sequence indicates the number of samples per cluster. Changed in version 0.20: one can now pass an array-like to the n_samples parameter.
n_featuresint, default=2
The number of features for each sample.
centersint or ndarray of shape (n_centers, n_features), default=None
The number of centers to generate, or the fixed center locations. If n_samples is an int and centers is None, 3 centers are generated. If n_samples is array-like, centers must be either None or an array of length equal to the length of n_samples.
cluster_stdfloat or array-like of float, default=1.0
The standard deviation of the clusters.
center_boxtuple of float (min, max), default=(-10.0, 10.0)
The bounding box for each cluster center when centers are generated at random.
shufflebool, default=True
Shuffle the samples.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary.
return_centersbool, default=False
If True, then return the centers of each cluster. New in version 0.23. Returns
Xndarray of shape (n_samples, n_features)
The generated samples.
yndarray of shape (n_samples,)
The integer labels for cluster membership of each sample.
centersndarray of shape (n_centers, n_features)
The centers of each cluster. Only returned if return_centers=True. See also
make_classification
A more intricate variant. Examples >>> from sklearn.datasets import make_blobs
>>> X, y = make_blobs(n_samples=10, centers=3, n_features=2,
... random_state=0)
>>> print(X.shape)
(10, 2)
>>> y
array([0, 0, 1, 0, 2, 2, 2, 1, 1, 0])
>>> X, y = make_blobs(n_samples=[3, 3, 4], centers=None, n_features=2,
... random_state=0)
>>> print(X.shape)
(10, 2)
>>> y
array([0, 1, 2, 0, 2, 2, 2, 1, 1, 0])
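The return_centers option can be sketched in the same style (parameter values chosen only for illustration):

```python
from sklearn.datasets import make_blobs

# return_centers=True additionally returns the true cluster centers.
X, y, centers = make_blobs(
    n_samples=10, centers=3, n_features=2, random_state=0,
    return_centers=True)

print(centers.shape)  # (3, 2)
```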
Examples using sklearn.datasets.make_blobs
Release Highlights for scikit-learn 0.23
Probability calibration of classifiers
Probability Calibration for 3-class classification
Normal, Ledoit-Wolf and OAS Linear Discriminant Analysis for classification
An example of K-Means++ initialization
A demo of the mean-shift clustering algorithm
Demonstration of k-means assumptions
Demo of affinity propagation clustering algorithm
Demo of DBSCAN clustering algorithm
Inductive Clustering
Compare BIRCH and MiniBatchKMeans
Comparison of the K-Means and MiniBatchKMeans clustering algorithms
Comparing different hierarchical linkage methods on toy datasets
Selecting the number of clusters with silhouette analysis on KMeans clustering
Comparing different clustering algorithms on toy datasets
Plot randomly generated classification dataset
SGD: Maximum margin separating hyperplane
Plot multinomial and One-vs-Rest Logistic Regression
Comparing anomaly detection algorithms for outlier detection on toy datasets
Demonstrating the different strategies of KBinsDiscretizer
SVM: Maximum margin separating hyperplane
SVM Tie Breaking Example
Plot the support vectors in LinearSVC
SVM: Separating hyperplane for unbalanced classes | sklearn.modules.generated.sklearn.datasets.make_blobs |
sklearn.datasets.make_checkerboard
sklearn.datasets.make_checkerboard(shape, n_clusters, *, noise=0.0, minval=10, maxval=100, shuffle=True, random_state=None) [source]
Generate an array with block checkerboard structure for biclustering. Read more in the User Guide. Parameters
shapetuple of shape (n_rows, n_cols)
The shape of the result.
n_clustersint or array-like or shape (n_row_clusters, n_column_clusters)
The number of row and column clusters.
noisefloat, default=0.0
The standard deviation of the gaussian noise.
minvalint, default=10
Minimum value of a bicluster.
maxvalint, default=100
Maximum value of a bicluster.
shufflebool, default=True
Shuffle the samples.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape shape
The generated array.
rowsndarray of shape (n_clusters, X.shape[0])
The indicators for cluster membership of each row.
colsndarray of shape (n_clusters, X.shape[1])
The indicators for cluster membership of each column. See also
make_biclusters
References
1
Kluger, Y., Basri, R., Chang, J. T., & Gerstein, M. (2003). Spectral biclustering of microarray data: coclustering genes and conditions. Genome research, 13(4), 703-716.
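A minimal usage sketch (sizes and the 3 x 2 cluster grid below are chosen only for illustration):

```python
from sklearn.datasets import make_checkerboard

# A checkerboard with 3 row clusters and 2 column clusters.
X, rows, cols = make_checkerboard(
    shape=(12, 10), n_clusters=(3, 2), random_state=0)

print(X.shape)  # (12, 10)
```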
Examples using sklearn.datasets.make_checkerboard
A demo of the Spectral Biclustering algorithm | sklearn.modules.generated.sklearn.datasets.make_checkerboard |
sklearn.datasets.make_circles
sklearn.datasets.make_circles(n_samples=100, *, shuffle=True, noise=None, random_state=None, factor=0.8) [source]
Make a large circle containing a smaller circle in 2d. A simple toy dataset to visualize clustering and classification algorithms. Read more in the User Guide. Parameters
n_samplesint or tuple of shape (2,), dtype=int, default=100
If int, it is the total number of points generated. For odd numbers, the inner circle will have one point more than the outer circle. If two-element tuple, number of points in outer circle and inner circle. Changed in version 0.23: Added two-element tuple.
shufflebool, default=True
Whether to shuffle the samples.
noisefloat, default=None
Standard deviation of Gaussian noise added to the data.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset shuffling and noise. Pass an int for reproducible output across multiple function calls. See Glossary.
factorfloat, default=.8
Scale factor between inner and outer circle in the range (0, 1). Returns
Xndarray of shape (n_samples, 2)
The generated samples.
yndarray of shape (n_samples,)
The integer labels (0 or 1) for class membership of each sample.
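A minimal usage sketch (the two-element n_samples tuple and noise level below are chosen only for illustration):

```python
from sklearn.datasets import make_circles

# factor=0.5 puts the inner circle at half the radius of the outer one;
# n_samples=(60, 40) puts 60 points on the outer and 40 on the inner circle.
X, y = make_circles(n_samples=(60, 40), noise=0.05, factor=0.5,
                    random_state=0)

print(X.shape)  # (100, 2)
```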
Examples using sklearn.datasets.make_circles
Classifier comparison
Comparing different hierarchical linkage methods on toy datasets
Comparing different clustering algorithms on toy datasets
Kernel PCA
Hashing feature transformation using Totally Random Trees
t-SNE: The effect of various perplexity values on the shape
Compare Stochastic learning strategies for MLPClassifier
Varying regularization in Multi-layer Perceptron
Feature discretization
Label Propagation learning a complex structure | sklearn.modules.generated.sklearn.datasets.make_circles |
sklearn.datasets.make_classification
sklearn.datasets.make_classification(n_samples=100, n_features=20, *, n_informative=2, n_redundant=2, n_repeated=0, n_classes=2, n_clusters_per_class=2, weights=None, flip_y=0.01, class_sep=1.0, hypercube=True, shift=0.0, scale=1.0, shuffle=True, random_state=None) [source]
Generate a random n-class classification problem. This initially creates clusters of points normally distributed (std=1) about vertices of an n_informative-dimensional hypercube with sides of length 2*class_sep and assigns an equal number of clusters to each class. It introduces interdependence between these features and adds various types of further noise to the data. Without shuffling, X horizontally stacks features in the following order: the primary n_informative features, followed by n_redundant linear combinations of the informative features, followed by n_repeated duplicates, drawn randomly with replacement from the informative and redundant features. The remaining features are filled with random noise. Thus, without shuffling, all useful features are contained in the columns X[:, :n_informative + n_redundant + n_repeated]. Read more in the User Guide. Parameters
n_samplesint, default=100
The number of samples.
n_featuresint, default=20
The total number of features. These comprise n_informative informative features, n_redundant redundant features, n_repeated duplicated features and n_features-n_informative-n_redundant-n_repeated useless features drawn at random.
n_informativeint, default=2
The number of informative features. Each class is composed of a number of gaussian clusters each located around the vertices of a hypercube in a subspace of dimension n_informative. For each cluster, informative features are drawn independently from N(0, 1) and then randomly linearly combined within each cluster in order to add covariance. The clusters are then placed on the vertices of the hypercube.
n_redundantint, default=2
The number of redundant features. These features are generated as random linear combinations of the informative features.
n_repeatedint, default=0
The number of duplicated features, drawn randomly from the informative and the redundant features.
n_classesint, default=2
The number of classes (or labels) of the classification problem.
n_clusters_per_classint, default=2
The number of clusters per class.
weightsarray-like of shape (n_classes,) or (n_classes - 1,), default=None
The proportions of samples assigned to each class. If None, then classes are balanced. Note that if len(weights) == n_classes - 1, then the last class weight is automatically inferred. More than n_samples samples may be returned if the sum of weights exceeds 1. Note that the actual class proportions will not exactly match weights when flip_y isn’t 0.
flip_yfloat, default=0.01
The fraction of samples whose class is assigned randomly. Larger values introduce noise in the labels and make the classification task harder. Note that the default setting flip_y > 0 might lead to less than n_classes in y in some cases.
class_sepfloat, default=1.0
The factor multiplying the hypercube size. Larger values spread out the clusters/classes and make the classification task easier.
hypercubebool, default=True
If True, the clusters are put on the vertices of a hypercube. If False, the clusters are put on the vertices of a random polytope.
shiftfloat, ndarray of shape (n_features,) or None, default=0.0
Shift features by the specified value. If None, then features are shifted by a random value drawn in [-class_sep, class_sep].
scalefloat, ndarray of shape (n_features,) or None, default=1.0
Multiply features by the specified value. If None, then features are scaled by a random value drawn in [1, 100]. Note that scaling happens after shifting.
shufflebool, default=True
Shuffle the samples and the features.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape (n_samples, n_features)
The generated samples.
yndarray of shape (n_samples,)
The integer labels for class membership of each sample. See also
make_blobs
Simplified variant.
make_multilabel_classification
Unrelated generator for multilabel tasks. Notes The algorithm is adapted from Guyon [1] and was designed to generate the “Madelon” dataset. References
1
I. Guyon, “Design of experiments for the NIPS 2003 variable selection benchmark”, 2003.
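A minimal usage sketch (the parameter values below are chosen only for illustration):

```python
from sklearn.datasets import make_classification

# 20 features total: 5 informative, 2 redundant, the rest random noise.
X, y = make_classification(
    n_samples=100, n_features=20, n_informative=5, n_redundant=2,
    n_classes=2, random_state=0)

print(X.shape)  # (100, 20)
```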
Examples using sklearn.datasets.make_classification
Release Highlights for scikit-learn 0.24
Release Highlights for scikit-learn 0.22
Comparison of Calibration of Classifiers
Probability Calibration curves
Classifier comparison
Plot randomly generated classification dataset
Feature importances with forests of trees
OOB Errors for Random Forests
Feature transformations with ensembles of trees
Pipeline Anova SVM
Recursive feature elimination with cross-validation
Detection error tradeoff (DET) curve
Successive Halving Iterations
Comparison between grid search and successive halving
Neighborhood Components Analysis Illustration
Varying regularization in Multi-layer Perceptron
Feature discretization
Scaling the regularization parameter for SVCs | sklearn.modules.generated.sklearn.datasets.make_classification |
sklearn.datasets.make_friedman1
sklearn.datasets.make_friedman1(n_samples=100, n_features=10, *, noise=0.0, random_state=None) [source]
Generate the “Friedman #1” regression problem. This dataset is described in Friedman [1] and Breiman [2]. Inputs X are independent features uniformly distributed on the interval [0, 1]. The output y is created according to the formula: y(X) = 10 * sin(pi * X[:, 0] * X[:, 1]) + 20 * (X[:, 2] - 0.5) ** 2 + 10 * X[:, 3] + 5 * X[:, 4] + noise * N(0, 1).
Out of the n_features features, only 5 are actually used to compute y. The remaining features are independent of y. The number of features has to be >= 5. Read more in the User Guide. Parameters
n_samplesint, default=100
The number of samples.
n_featuresint, default=10
The number of features. Should be at least 5.
noisefloat, default=0.0
The standard deviation of the gaussian noise applied to the output.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset noise. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape (n_samples, n_features)
The input samples.
yndarray of shape (n_samples,)
The output values. References
1
J. Friedman, “Multivariate adaptive regression splines”, The Annals of Statistics 19 (1), pages 1-67, 1991.
2
L. Breiman, “Bagging predictors”, Machine Learning 24, pages 123-140, 1996. | sklearn.modules.generated.sklearn.datasets.make_friedman1 |
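With noise=0.0 the documented formula can be checked directly against the output; a quick sketch:

```python
import numpy as np

from sklearn.datasets import make_friedman1

X, y = make_friedman1(n_samples=50, n_features=10, noise=0.0,
                      random_state=0)

# Recompute y from the formula above; only the first 5 features matter.
expected = (10 * np.sin(np.pi * X[:, 0] * X[:, 1])
            + 20 * (X[:, 2] - 0.5) ** 2
            + 10 * X[:, 3] + 5 * X[:, 4])
print(np.allclose(y, expected))  # True
```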
sklearn.datasets.make_friedman2
sklearn.datasets.make_friedman2(n_samples=100, *, noise=0.0, random_state=None) [source]
Generate the “Friedman #2” regression problem. This dataset is described in Friedman [1] and Breiman [2]. Inputs X are 4 independent features uniformly distributed on the intervals: 0 <= X[:, 0] <= 100,
40 * pi <= X[:, 1] <= 560 * pi,
0 <= X[:, 2] <= 1,
1 <= X[:, 3] <= 11.
The output y is created according to the formula: y(X) = (X[:, 0] ** 2 + (X[:, 1] * X[:, 2] - 1 / (X[:, 1] * X[:, 3])) ** 2) ** 0.5 + noise * N(0, 1).
Read more in the User Guide. Parameters
n_samplesint, default=100
The number of samples.
noisefloat, default=0.0
The standard deviation of the gaussian noise applied to the output.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset noise. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape (n_samples, 4)
The input samples.
yndarray of shape (n_samples,)
The output values. References
1
J. Friedman, “Multivariate adaptive regression splines”, The Annals of Statistics 19 (1), pages 1-67, 1991.
2
L. Breiman, “Bagging predictors”, Machine Learning 24, pages 123-140, 1996. | sklearn.modules.generated.sklearn.datasets.make_friedman2 |
sklearn.datasets.make_friedman3
sklearn.datasets.make_friedman3(n_samples=100, *, noise=0.0, random_state=None) [source]
Generate the “Friedman #3” regression problem. This dataset is described in Friedman [1] and Breiman [2]. Inputs X are 4 independent features uniformly distributed on the intervals: 0 <= X[:, 0] <= 100,
40 * pi <= X[:, 1] <= 560 * pi,
0 <= X[:, 2] <= 1,
1 <= X[:, 3] <= 11.
The output y is created according to the formula: y(X) = arctan((X[:, 1] * X[:, 2] - 1 / (X[:, 1] * X[:, 3])) / X[:, 0]) + noise * N(0, 1).
Read more in the User Guide. Parameters
n_samplesint, default=100
The number of samples.
noisefloat, default=0.0
The standard deviation of the gaussian noise applied to the output.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset noise. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape (n_samples, 4)
The input samples.
yndarray of shape (n_samples,)
The output values. References
1
J. Friedman, “Multivariate adaptive regression splines”, The Annals of Statistics 19 (1), pages 1-67, 1991.
2
L. Breiman, “Bagging predictors”, Machine Learning 24, pages 123-140, 1996. | sklearn.modules.generated.sklearn.datasets.make_friedman3 |
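Again, with noise=0.0 the documented formula can be checked against the output:

```python
import numpy as np

from sklearn.datasets import make_friedman3

X, y = make_friedman3(n_samples=50, noise=0.0, random_state=0)

# Recompute y from the arctan formula above (exact when noise=0.0).
expected = np.arctan(
    (X[:, 1] * X[:, 2] - 1 / (X[:, 1] * X[:, 3])) / X[:, 0])
print(np.allclose(y, expected))  # True
```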
sklearn.datasets.make_gaussian_quantiles
sklearn.datasets.make_gaussian_quantiles(*, mean=None, cov=1.0, n_samples=100, n_features=2, n_classes=3, shuffle=True, random_state=None) [source]
Generate isotropic Gaussian and label samples by quantile. This classification dataset is constructed by taking a multi-dimensional standard normal distribution and defining classes separated by nested concentric multi-dimensional spheres such that roughly equal numbers of samples are in each class (quantiles of the \(\chi^2\) distribution). Read more in the User Guide. Parameters
meanndarray of shape (n_features,), default=None
The mean of the multi-dimensional normal distribution. If None then use the origin (0, 0, …).
covfloat, default=1.0
The covariance matrix will be this value times the unit matrix. This dataset only produces symmetric normal distributions.
n_samplesint, default=100
The total number of points equally divided among classes.
n_featuresint, default=2
The number of features for each sample.
n_classesint, default=3
The number of classes.
shufflebool, default=True
Shuffle the samples.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape (n_samples, n_features)
The generated samples.
yndarray of shape (n_samples,)
The integer labels for quantile membership of each sample. Notes The dataset is from Zhu et al [1]. References
1
Zhu, H. Zou, S. Rosset, T. Hastie, “Multi-class AdaBoost”, 2009.
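A minimal usage sketch (the parameter values below are chosen only for illustration):

```python
import numpy as np

from sklearn.datasets import make_gaussian_quantiles

# 90 samples split by chi-squared quantiles into 3 roughly equal classes.
X, y = make_gaussian_quantiles(n_samples=90, n_features=2, n_classes=3,
                               random_state=0)

print(X.shape)         # (90, 2)
print(np.bincount(y))  # roughly 30 samples per class
```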
Examples using sklearn.datasets.make_gaussian_quantiles
Plot randomly generated classification dataset
Two-class AdaBoost
Multi-class AdaBoosted Decision Trees | sklearn.modules.generated.sklearn.datasets.make_gaussian_quantiles |
sklearn.datasets.make_hastie_10_2
sklearn.datasets.make_hastie_10_2(n_samples=12000, *, random_state=None) [source]
Generates data for binary classification used in Hastie et al. 2009, Example 10.2. The ten features are standard independent Gaussian and the target y is defined by: y[i] = 1 if np.sum(X[i] ** 2) > 9.34 else -1
Read more in the User Guide. Parameters
n_samplesint, default=12000
The number of samples.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape (n_samples, 10)
The input samples.
yndarray of shape (n_samples,)
The output values. See also
make_gaussian_quantiles
A generalization of this dataset approach. References
1
T. Hastie, R. Tibshirani and J. Friedman, “Elements of Statistical Learning Ed. 2”, Springer, 2009.
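The documented target rule can be verified directly on the output; a quick sketch:

```python
import numpy as np

from sklearn.datasets import make_hastie_10_2

X, y = make_hastie_10_2(n_samples=200, random_state=0)

# Check the rule above: +1 when the squared norm exceeds 9.34, else -1.
expected = np.where((X ** 2).sum(axis=1) > 9.34, 1.0, -1.0)
print(np.array_equal(y, expected))  # True
```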
Examples using sklearn.datasets.make_hastie_10_2
Gradient Boosting regularization
Discrete versus Real AdaBoost
Early stopping of Gradient Boosting
Demonstration of multi-metric evaluation on cross_val_score and GridSearchCV | sklearn.modules.generated.sklearn.datasets.make_hastie_10_2 |
sklearn.datasets.make_low_rank_matrix
sklearn.datasets.make_low_rank_matrix(n_samples=100, n_features=100, *, effective_rank=10, tail_strength=0.5, random_state=None) [source]
Generate a mostly low rank matrix with bell-shaped singular values. Most of the variance can be explained by a bell-shaped curve of width effective_rank: the low rank part of the singular values profile is: (1 - tail_strength) * exp(-1.0 * (i / effective_rank) ** 2)
The remaining singular values’ tail is fat, decreasing as: tail_strength * exp(-0.1 * i / effective_rank).
The low rank part of the profile can be considered the structured signal part of the data while the tail can be considered the noisy part of the data that cannot be summarized by a low number of linear components (singular vectors). This kind of singular profile is often seen in practice, for instance:
gray level pictures of faces
TF-IDF vectors of text documents crawled from the web
Read more in the User Guide. Parameters
n_samplesint, default=100
The number of samples.
n_featuresint, default=100
The number of features.
effective_rankint, default=10
The approximate number of singular vectors required to explain most of the data by linear combinations.
tail_strengthfloat, default=0.5
The relative importance of the fat noisy tail of the singular values profile. The value should be between 0 and 1.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape (n_samples, n_features)
The matrix. | sklearn.modules.generated.sklearn.datasets.make_low_rank_matrix |
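A minimal usage sketch (the parameter values below are chosen only for illustration); the singular value profile of the result can be inspected with an SVD:

```python
import numpy as np

from sklearn.datasets import make_low_rank_matrix

X = make_low_rank_matrix(n_samples=50, n_features=30, effective_rank=5,
                         tail_strength=0.01, random_state=0)

# With a small tail_strength, the singular values decay quickly
# past effective_rank.
s = np.linalg.svd(X, compute_uv=False)
print(X.shape)  # (50, 30)
```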
sklearn.datasets.make_moons
sklearn.datasets.make_moons(n_samples=100, *, shuffle=True, noise=None, random_state=None) [source]
Make two interleaving half circles. A simple toy dataset to visualize clustering and classification algorithms. Read more in the User Guide. Parameters
n_samplesint or tuple of shape (2,), dtype=int, default=100
If int, the total number of points generated. If two-element tuple, number of points in each of two moons. Changed in version 0.23: Added two-element tuple.
shufflebool, default=True
Whether to shuffle the samples.
noisefloat, default=None
Standard deviation of Gaussian noise added to the data.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset shuffling and noise. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape (n_samples, 2)
The generated samples.
yndarray of shape (n_samples,)
The integer labels (0 or 1) for class membership of each sample.
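A minimal usage sketch (the two-element n_samples tuple and noise level below are chosen only for illustration):

```python
from sklearn.datasets import make_moons

# n_samples=(60, 40) puts 60 points in the first moon and 40 in the second.
X, y = make_moons(n_samples=(60, 40), noise=0.1, random_state=0)

print(X.shape)  # (100, 2)
```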
Examples using sklearn.datasets.make_moons
Classifier comparison
Comparing different hierarchical linkage methods on toy datasets
Comparing different clustering algorithms on toy datasets
Comparing anomaly detection algorithms for outlier detection on toy datasets
Statistical comparison of models using grid search
Compare Stochastic learning strategies for MLPClassifier
Varying regularization in Multi-layer Perceptron
Feature discretization | sklearn.modules.generated.sklearn.datasets.make_moons |
sklearn.datasets.make_multilabel_classification
sklearn.datasets.make_multilabel_classification(n_samples=100, n_features=20, *, n_classes=5, n_labels=2, length=50, allow_unlabeled=True, sparse=False, return_indicator='dense', return_distributions=False, random_state=None) [source]
Generate a random multilabel classification problem. For each sample, the generative process is:
pick the number of labels: n ~ Poisson(n_labels)
n times, choose a class c: c ~ Multinomial(theta)
pick the document length: k ~ Poisson(length)
k times, choose a word: w ~ Multinomial(theta_c)
In the above process, rejection sampling is used to make sure that n is never zero or more than n_classes, and that the document length is never zero. Likewise, we reject classes which have already been chosen. Read more in the User Guide. Parameters
n_samplesint, default=100
The number of samples.
n_featuresint, default=20
The total number of features.
n_classesint, default=5
The number of classes of the classification problem.
n_labelsint, default=2
The average number of labels per instance. More precisely, the number of labels per sample is drawn from a Poisson distribution with n_labels as its expected value, but samples are bounded (using rejection sampling) by n_classes, and must be nonzero if allow_unlabeled is False.
lengthint, default=50
The sum of the features (number of words if documents) is drawn from a Poisson distribution with this expected value.
allow_unlabeledbool, default=True
If True, some instances might not belong to any class.
sparsebool, default=False
If True, return a sparse feature matrix. New in version 0.17: parameter to allow sparse output.
return_indicator{‘dense’, ‘sparse’} or False, default=’dense’
If 'dense' return Y in the dense binary indicator format. If 'sparse' return Y in the sparse binary indicator format. False returns a list of lists of labels.
return_distributionsbool, default=False
If True, return the prior class probability and conditional probabilities of features given classes, from which the data was drawn.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape (n_samples, n_features)
The generated samples.
Y{ndarray, sparse matrix} of shape (n_samples, n_classes)
The label sets. Sparse matrix should be of CSR format.
p_cndarray of shape (n_classes,)
The probability of each class being drawn. Only returned if return_distributions=True.
p_w_cndarray of shape (n_features, n_classes)
The probability of each feature being drawn given each class. Only returned if return_distributions=True.
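A small sketch of the default (dense indicator) output; the sizes are arbitrary:

```python
from sklearn.datasets import make_multilabel_classification

# 50 samples of 10 count features, 4 classes, ~2 labels per sample on average.
X, Y = make_multilabel_classification(n_samples=50, n_features=10,
                                      n_classes=4, n_labels=2,
                                      random_state=0)
print(X.shape, Y.shape)  # (50, 10) (50, 4)
```

Each row of Y is a binary indicator vector over the 4 classes.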
Examples using sklearn.datasets.make_multilabel_classification
Plot randomly generated multilabel dataset
Multilabel classification | sklearn.modules.generated.sklearn.datasets.make_multilabel_classification |
sklearn.datasets.make_regression
sklearn.datasets.make_regression(n_samples=100, n_features=100, *, n_informative=10, n_targets=1, bias=0.0, effective_rank=None, tail_strength=0.5, noise=0.0, shuffle=True, coef=False, random_state=None) [source]
Generate a random regression problem. The input set can either be well conditioned (by default) or have a low rank-fat tail singular profile. See make_low_rank_matrix for more details. The output is generated by applying a (potentially biased) random linear regression model with n_informative nonzero regressors to the previously generated input and some gaussian centered noise with some adjustable scale. Read more in the User Guide. Parameters
n_samplesint, default=100
The number of samples.
n_featuresint, default=100
The number of features.
n_informativeint, default=10
The number of informative features, i.e., the number of features used to build the linear model used to generate the output.
n_targetsint, default=1
The number of regression targets, i.e., the dimension of the y output vector associated with a sample. By default, the output is a scalar.
biasfloat, default=0.0
The bias term in the underlying linear model.
effective_rankint, default=None
If not None: the approximate number of singular vectors required to explain most of the input data by linear combinations. Using this kind of singular spectrum in the input allows the generator to reproduce the correlations often observed in practice. If None: the input set is well conditioned, centered and gaussian with unit variance.
tail_strengthfloat, default=0.5
The relative importance of the fat noisy tail of the singular values profile if effective_rank is not None. When a float, it should be between 0 and 1.
noisefloat, default=0.0
The standard deviation of the gaussian noise applied to the output.
shufflebool, default=True
Shuffle the samples and the features.
coefbool, default=False
If True, the coefficients of the underlying linear model are returned.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape (n_samples, n_features)
The input samples.
yndarray of shape (n_samples,) or (n_samples, n_targets)
The output values.
coefndarray of shape (n_features,) or (n_features, n_targets)
The coefficient of the underlying linear model. It is returned only if coef is True.
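A short sketch showing the coef=True form, which also returns the ground-truth coefficients of the underlying linear model (the sizes are arbitrary):

```python
from sklearn.datasets import make_regression

# Only 5 of the 20 features actually drive the target.
X, y, coef = make_regression(n_samples=100, n_features=20, n_informative=5,
                             noise=0.5, coef=True, random_state=0)
print(X.shape, y.shape)  # (100, 20) (100,)
# coef has exactly n_informative non-zero entries.
print(int((coef != 0).sum()))
```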
Examples using sklearn.datasets.make_regression
Release Highlights for scikit-learn 0.23
Prediction Latency
Plot Ridge coefficients as a function of the L2 regularization
Robust linear model estimation using RANSAC
HuberRegressor vs Ridge on dataset with strong outliers
Lasso on dense and sparse data
Effect of transforming the targets in regression model | sklearn.modules.generated.sklearn.datasets.make_regression |
sklearn.datasets.make_sparse_coded_signal
sklearn.datasets.make_sparse_coded_signal(n_samples, *, n_components, n_features, n_nonzero_coefs, random_state=None) [source]
Generate a signal as a sparse combination of dictionary elements. Returns a matrix Y = DX, such that D is (n_features, n_components), X is (n_components, n_samples) and each column of X has exactly n_nonzero_coefs non-zero elements. Read more in the User Guide. Parameters
n_samplesint
Number of samples to generate.
n_componentsint
Number of components in the dictionary.
n_featuresint
Number of features of the dataset to generate.
n_nonzero_coefsint
Number of active (non-zero) coefficients in each sample.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
datandarray of shape (n_features, n_samples)
The encoded signal (Y).
dictionaryndarray of shape (n_features, n_components)
The dictionary with normalized components (D).
codendarray of shape (n_components, n_samples)
The sparse code such that each column of this matrix has exactly n_nonzero_coefs non-zero items (X).
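A short sketch; note that the shapes shown match this documentation's version (column-per-sample), while newer scikit-learn releases return the arrays transposed, so the checks below use only orientation-independent properties:

```python
from sklearn.datasets import make_sparse_coded_signal

# Y = D X_code with exactly 3 active dictionary atoms per generated sample.
Y, D, X_code = make_sparse_coded_signal(n_samples=20, n_components=15,
                                        n_features=10, n_nonzero_coefs=3,
                                        random_state=0)
print(Y.size, D.size, X_code.size)  # 200 150 300
print(int((X_code != 0).sum()))     # 60  (3 non-zeros x 20 samples)
```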
Examples using sklearn.datasets.make_sparse_coded_signal
Orthogonal Matching Pursuit | sklearn.modules.generated.sklearn.datasets.make_sparse_coded_signal |
sklearn.datasets.make_sparse_spd_matrix
sklearn.datasets.make_sparse_spd_matrix(dim=1, *, alpha=0.95, norm_diag=False, smallest_coef=0.1, largest_coef=0.9, random_state=None) [source]
Generate a sparse symmetric positive definite matrix. Read more in the User Guide. Parameters
dimint, default=1
The size of the random matrix to generate.
alphafloat, default=0.95
The probability that a coefficient is zero (see notes). Larger values enforce more sparsity. The value should be in the range [0, 1].
norm_diagbool, default=False
Whether to normalize the output matrix to make the leading diagonal elements all 1.
smallest_coeffloat, default=0.1
The value of the smallest coefficient between 0 and 1.
largest_coeffloat, default=0.9
The value of the largest coefficient between 0 and 1.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
precsparse matrix of shape (dim, dim)
The generated matrix. See also
make_spd_matrix
Notes The sparsity is actually imposed on the Cholesky factor of the matrix. Thus alpha does not translate directly into the filling fraction of the matrix itself.
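A quick sanity check of the two defining properties, symmetry and positive definiteness (the size argument is passed positionally here, since its keyword name has changed between releases):

```python
import numpy as np
from sklearn.datasets import make_sparse_spd_matrix

# A 6x6 sparse precision-style matrix; alpha=0.9 makes it quite sparse.
prec = make_sparse_spd_matrix(6, alpha=0.9, random_state=0)
print(np.allclose(prec, prec.T))                    # True (symmetric)
print(bool(np.all(np.linalg.eigvalsh(prec) > 0)))   # True (positive definite)
```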
Examples using sklearn.datasets.make_sparse_spd_matrix
Sparse inverse covariance estimation | sklearn.modules.generated.sklearn.datasets.make_sparse_spd_matrix |
sklearn.datasets.make_sparse_uncorrelated
sklearn.datasets.make_sparse_uncorrelated(n_samples=100, n_features=10, *, random_state=None) [source]
Generate a random regression problem with sparse uncorrelated design. This dataset is described in Celeux et al. [1] as: X ~ N(0, 1)
y(X) = X[:, 0] + 2 * X[:, 1] - 2 * X[:, 2] - 1.5 * X[:, 3]
Only the first 4 features are informative. The remaining features are useless. Read more in the User Guide. Parameters
n_samplesint, default=100
The number of samples.
n_featuresint, default=10
The number of features.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape (n_samples, n_features)
The input samples.
yndarray of shape (n_samples,)
The output values. References
1
G. Celeux, M. El Anbari, J.-M. Marin, C. P. Robert, “Regularization in regression: comparing Bayesian and frequentist methods in a poorly informative situation”, 2009. | sklearn.modules.generated.sklearn.datasets.make_sparse_uncorrelated |
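A short check of the stated structure. As implemented, the target is drawn with unit-variance Gaussian noise around the linear mean above, so the sketch only verifies a strong correlation rather than exact equality:

```python
import numpy as np
from sklearn.datasets import make_sparse_uncorrelated

X, y = make_sparse_uncorrelated(n_samples=200, n_features=10, random_state=0)
# Only the first 4 features enter the target's mean.
mean = X[:, 0] + 2 * X[:, 1] - 2 * X[:, 2] - 1.5 * X[:, 3]
print(np.corrcoef(y, mean)[0, 1] > 0.8)  # True: y tracks the first 4 features
```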
sklearn.datasets.make_spd_matrix
sklearn.datasets.make_spd_matrix(n_dim, *, random_state=None) [source]
Generate a random symmetric, positive-definite matrix. Read more in the User Guide. Parameters
n_dimint
The matrix dimension.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape (n_dim, n_dim)
The random symmetric, positive-definite matrix. See also
make_sparse_spd_matrix | sklearn.modules.generated.sklearn.datasets.make_spd_matrix |
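A quick sanity check of the defining properties:

```python
import numpy as np
from sklearn.datasets import make_spd_matrix

A = make_spd_matrix(n_dim=4, random_state=0)
print(A.shape)                                   # (4, 4)
print(np.allclose(A, A.T))                       # True (symmetric)
print(bool(np.all(np.linalg.eigvalsh(A) > 0)))   # True (positive definite)
```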
sklearn.datasets.make_swiss_roll
sklearn.datasets.make_swiss_roll(n_samples=100, *, noise=0.0, random_state=None) [source]
Generate a swiss roll dataset. Read more in the User Guide. Parameters
n_samplesint, default=100
The number of sample points on the Swiss Roll.
noisefloat, default=0.0
The standard deviation of the gaussian noise.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape (n_samples, 3)
The points.
tndarray of shape (n_samples,)
The univariate position of the sample according to the main dimension of the points in the manifold. Notes The algorithm is from Marsland [1]. References
1
S. Marsland, “Machine Learning: An Algorithmic Perspective”, Chapter 10, 2009. http://seat.massey.ac.nz/personal/s.r.marsland/Code/10/lle.py
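A minimal usage sketch (the sample count and noise level are arbitrary):

```python
from sklearn.datasets import make_swiss_roll

# 300 points in 3D, plus the univariate position t along the roll.
X, t = make_swiss_roll(n_samples=300, noise=0.05, random_state=0)
print(X.shape, t.shape)  # (300, 3) (300,)
```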
Examples using sklearn.datasets.make_swiss_roll
Hierarchical clustering: structured vs unstructured ward
Swiss Roll reduction with LLE | sklearn.modules.generated.sklearn.datasets.make_swiss_roll |
sklearn.datasets.make_s_curve
sklearn.datasets.make_s_curve(n_samples=100, *, noise=0.0, random_state=None) [source]
Generate an S curve dataset. Read more in the User Guide. Parameters
n_samplesint, default=100
The number of sample points on the S curve.
noisefloat, default=0.0
The standard deviation of the gaussian noise.
random_stateint, RandomState instance or None, default=None
Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary. Returns
Xndarray of shape (n_samples, 3)
The points.
tndarray of shape (n_samples,)
The univariate position of the sample according to the main dimension of the points in the manifold.
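A minimal usage sketch (the sample count and noise level are arbitrary):

```python
from sklearn.datasets import make_s_curve

# 300 points in 3D, plus the univariate position t along the curve.
X, t = make_s_curve(n_samples=300, noise=0.05, random_state=0)
print(X.shape, t.shape)  # (300, 3) (300,)
```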
Examples using sklearn.datasets.make_s_curve
Comparison of Manifold Learning methods
t-SNE: The effect of various perplexity values on the shape | sklearn.modules.generated.sklearn.datasets.make_s_curve |
sklearn.decomposition.dict_learning
sklearn.decomposition.dict_learning(X, n_components, *, alpha, max_iter=100, tol=1e-08, method='lars', n_jobs=None, dict_init=None, code_init=None, callback=None, verbose=False, random_state=None, return_n_iter=False, positive_dict=False, positive_code=False, method_max_iter=1000) [source]
Solves a dictionary learning matrix factorization problem. Finds the best dictionary and the corresponding sparse code for approximating the data matrix X by solving:
(U^*, V^*) = argmin_{(U,V)} 0.5 || X - U V ||_2^2 + alpha * || U ||_1
with || V_k ||_2 = 1 for all 0 <= k < n_components
where V is the dictionary and U is the sparse code. Read more in the User Guide. Parameters
Xndarray of shape (n_samples, n_features)
Data matrix.
n_componentsint
Number of dictionary atoms to extract.
alphafloat
Sparsity controlling parameter.
max_iterint, default=100
Maximum number of iterations to perform.
tolfloat, default=1e-8
Tolerance for the stopping condition.
method{‘lars’, ‘cd’}, default=’lars’
The method used:
'lars': uses the least angle regression method to solve the lasso
problem (linear_model.lars_path);
'cd': uses the coordinate descent method to compute the Lasso solution (linear_model.Lasso). Lars will be faster if the estimated components are sparse.
n_jobsint, default=None
Number of parallel jobs to run. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
dict_initndarray of shape (n_components, n_features), default=None
Initial value for the dictionary for warm restart scenarios.
code_initndarray of shape (n_samples, n_components), default=None
Initial value for the sparse code for warm restart scenarios.
callbackcallable, default=None
Callable that gets invoked every five iterations.
verbosebool, default=False
To control the verbosity of the procedure.
random_stateint, RandomState instance or None, default=None
Used for randomly initializing the dictionary. Pass an int for reproducible results across multiple function calls. See Glossary.
return_n_iterbool, default=False
Whether or not to return the number of iterations.
positive_dictbool, default=False
Whether to enforce positivity when finding the dictionary. New in version 0.20.
positive_codebool, default=False
Whether to enforce positivity when finding the code. New in version 0.20.
method_max_iterint, default=1000
Maximum number of iterations to perform. New in version 0.22. Returns
codendarray of shape (n_samples, n_components)
The sparse code factor in the matrix factorization.
dictionaryndarray of shape (n_components, n_features),
The dictionary factor in the matrix factorization.
errorsarray
Vector of errors at each iteration.
n_iterint
Number of iterations run. Returned only if return_n_iter is set to True. See also
dict_learning_online
DictionaryLearning
MiniBatchDictionaryLearning
SparsePCA
MiniBatchSparsePCA | sklearn.modules.generated.sklearn.decomposition.dict_learning |
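A minimal sketch on random data (the sizes and alpha are arbitrary; real use cases factor structured data such as image patches):

```python
import numpy as np
from sklearn.decomposition import dict_learning

rng = np.random.RandomState(0)
X = rng.randn(30, 8)

# Factor X into a sparse code U (30 x 5) and a dictionary V (5 x 8).
code, dictionary, errors = dict_learning(X, n_components=5, alpha=1.0,
                                         max_iter=20, random_state=0)
print(code.shape, dictionary.shape)  # (30, 5) (5, 8)
```

The errors vector records the objective value at each iteration, which is useful for checking convergence.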
sklearn.decomposition.dict_learning_online
sklearn.decomposition.dict_learning_online(X, n_components=2, *, alpha=1, n_iter=100, return_code=True, dict_init=None, callback=None, batch_size=3, verbose=False, shuffle=True, n_jobs=None, method='lars', iter_offset=0, random_state=None, return_inner_stats=False, inner_stats=None, return_n_iter=False, positive_dict=False, positive_code=False, method_max_iter=1000) [source]
Solves a dictionary learning matrix factorization problem online. Finds the best dictionary and the corresponding sparse code for approximating the data matrix X by solving:
(U^*, V^*) = argmin_{(U,V)} 0.5 || X - U V ||_2^2 + alpha * || U ||_1
with || V_k ||_2 = 1 for all 0 <= k < n_components
where V is the dictionary and U is the sparse code. This is accomplished by repeatedly iterating over mini-batches by slicing the input data. Read more in the User Guide. Parameters
Xndarray of shape (n_samples, n_features)
Data matrix.
n_componentsint, default=2
Number of dictionary atoms to extract.
alphafloat, default=1
Sparsity controlling parameter.
n_iterint, default=100
Number of mini-batch iterations to perform.
return_codebool, default=True
Whether to also return the code U or just the dictionary V.
dict_initndarray of shape (n_components, n_features), default=None
Initial value for the dictionary for warm restart scenarios.
callbackcallable, default=None
callable that gets invoked every five iterations.
batch_sizeint, default=3
The number of samples to take in each batch.
verbosebool, default=False
To control the verbosity of the procedure.
shufflebool, default=True
Whether to shuffle the data before splitting it in batches.
n_jobsint, default=None
Number of parallel jobs to run. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
method{‘lars’, ‘cd’}, default=’lars’
'lars': uses the least angle regression method to solve the lasso problem (linear_model.lars_path);
'cd': uses the coordinate descent method to compute the Lasso solution (linear_model.Lasso). Lars will be faster if the estimated components are sparse.
iter_offsetint, default=0
Number of previous iterations completed on the dictionary used for initialization.
random_stateint, RandomState instance or None, default=None
Used for initializing the dictionary when dict_init is not specified, randomly shuffling the data when shuffle is set to True, and updating the dictionary. Pass an int for reproducible results across multiple function calls. See Glossary.
return_inner_statsbool, default=False
Return the inner statistics A (dictionary covariance) and B (data approximation). Useful to restart the algorithm in an online setting. If return_inner_stats is True, return_code is ignored.
inner_statstuple of (A, B) ndarrays, default=None
Inner sufficient statistics that are kept by the algorithm. Passing them at initialization is useful in online settings, to avoid losing the history of the evolution. A (n_components, n_components) is the dictionary covariance matrix. B (n_features, n_components) is the data approximation matrix.
return_n_iterbool, default=False
Whether or not to return the number of iterations.
positive_dictbool, default=False
Whether to enforce positivity when finding the dictionary. New in version 0.20.
positive_codebool, default=False
Whether to enforce positivity when finding the code. New in version 0.20.
method_max_iterint, default=1000
Maximum number of iterations to perform when solving the lasso problem. New in version 0.22. Returns
codendarray of shape (n_samples, n_components),
The sparse code (only returned if return_code=True).
dictionaryndarray of shape (n_components, n_features),
The solutions to the dictionary learning problem.
n_iterint
Number of iterations run. Returned only if return_n_iter is set to True. See also
dict_learning
DictionaryLearning
MiniBatchDictionaryLearning
SparsePCA
MiniBatchSparsePCA | sklearn.modules.generated.sklearn.decomposition.dict_learning_online |
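A minimal sketch of the online (mini-batch) variant on random data; with the default return_code=True it returns both factors:

```python
import numpy as np
from sklearn.decomposition import dict_learning_online

rng = np.random.RandomState(0)
X = rng.randn(40, 8)

# Mini-batch dictionary learning: code (40 x 5) and dictionary (5 x 8).
code, dictionary = dict_learning_online(X, n_components=5, alpha=1.0,
                                        random_state=0)
print(code.shape, dictionary.shape)  # (40, 5) (5, 8)
```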
sklearn.decomposition.fastica
sklearn.decomposition.fastica(X, n_components=None, *, algorithm='parallel', whiten=True, fun='logcosh', fun_args=None, max_iter=200, tol=0.0001, w_init=None, random_state=None, return_X_mean=False, compute_sources=True, return_n_iter=False) [source]
Perform Fast Independent Component Analysis. Read more in the User Guide. Parameters
Xarray-like of shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
n_componentsint, default=None
Number of components to extract. If None no dimension reduction is performed.
algorithm{‘parallel’, ‘deflation’}, default=’parallel’
Apply a parallel or deflational FASTICA algorithm.
whitenbool, default=True
If True perform an initial whitening of the data. If False, the data is assumed to have already been preprocessed: it should be centered, normed and white. Otherwise you will get incorrect results. In this case the parameter n_components will be ignored.
fun{‘logcosh’, ‘exp’, ‘cube’} or callable, default=’logcosh’
The functional form of the G function used in the approximation to neg-entropy. Could be either ‘logcosh’, ‘exp’, or ‘cube’. You can also provide your own function. It should return a tuple containing the value of the function, and of its derivative, in the point. The derivative should be averaged along its last dimension. Example: def my_g(x):
return x ** 3, np.mean(3 * x ** 2, axis=-1)
fun_argsdict, default=None
Arguments to send to the functional form. If empty or None and if fun=’logcosh’, fun_args will take value {‘alpha’ : 1.0}
max_iterint, default=200
Maximum number of iterations to perform.
tolfloat, default=1e-04
A positive scalar giving the tolerance at which the un-mixing matrix is considered to have converged.
w_initndarray of shape (n_components, n_components), default=None
Initial un-mixing array of dimension (n.comp,n.comp). If None (default) then an array of normal r.v.’s is used.
random_stateint, RandomState instance or None, default=None
Used to initialize w_init when not specified, with a normal distribution. Pass an int, for reproducible results across multiple function calls. See Glossary.
return_X_meanbool, default=False
If True, X_mean is returned too.
compute_sourcesbool, default=True
If False, sources are not computed, but only the rotation matrix. This can save memory when working with big data. Defaults to True.
return_n_iterbool, default=False
Whether or not to return the number of iterations. Returns
Kndarray of shape (n_components, n_features) or None
If whiten is ‘True’, K is the pre-whitening matrix that projects data onto the first n_components principal components. If whiten is ‘False’, K is ‘None’.
Wndarray of shape (n_components, n_components)
The square matrix that unmixes the data after whitening. The mixing matrix is the pseudo-inverse of matrix W K if K is not None, else it is the inverse of W.
Sndarray of shape (n_samples, n_components) or None
Estimated source matrix
X_meanndarray of shape (n_features,)
The mean over features. Returned only if return_X_mean is True.
n_iterint
If the algorithm is “deflation”, n_iter is the maximum number of iterations run across all components. Otherwise it is just the number of iterations taken to converge. This is returned only when return_n_iter is set to True. Notes The data matrix X is considered to be a linear combination of non-Gaussian (independent) components i.e. X = AS where columns of S contain the independent components and A is a linear mixing matrix. In short, ICA attempts to 'un-mix' the data by estimating an un-mixing matrix W where S = W K X. While FastICA was proposed to estimate as many sources as features, it is possible to estimate fewer by setting n_components < n_features. In this case K is not a square matrix and the estimated A is the pseudo-inverse of W K. This implementation was originally made for data of shape [n_features, n_samples]. Now the input is transposed before the algorithm is applied. This makes it slightly faster for Fortran-ordered input. Implemented using FastICA: A. Hyvarinen and E. Oja, Independent Component Analysis: Algorithms and Applications, Neural Networks, 13(4-5), 2000, pp. 411-430 | sklearn.modules.generated.fastica-function
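A small sketch that unmixes two linearly mixed non-Gaussian sources (the signals and mixing matrix below are made up for illustration):

```python
import numpy as np
from sklearn.decomposition import fastica

t = np.linspace(0, 8, 500)
S = np.c_[np.sin(2 * t), np.sign(np.cos(3 * t))]  # two independent sources
A = np.array([[1.0, 0.5], [0.5, 2.0]])            # mixing matrix
X = S @ A.T                                       # observed mixtures

# K whitens, W unmixes: the estimated sources are S_est = X K.T W.T.
K, W, S_est = fastica(X, n_components=2, random_state=0)
print(K.shape, W.shape, S_est.shape)  # (2, 2) (2, 2) (500, 2)
```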
sklearn.decomposition.non_negative_factorization
sklearn.decomposition.non_negative_factorization(X, W=None, H=None, n_components=None, *, init='warn', update_H=True, solver='cd', beta_loss='frobenius', tol=0.0001, max_iter=200, alpha=0.0, l1_ratio=0.0, regularization=None, random_state=None, verbose=0, shuffle=False) [source]
Compute Non-negative Matrix Factorization (NMF). Find two non-negative matrices (W, H) whose product approximates the non- negative matrix X. This factorization can be used for example for dimensionality reduction, source separation or topic extraction. The objective function is: \[ \begin{align}\begin{aligned}0.5 * ||X - WH||_{Fro}^2 + alpha * l1_{ratio} * ||vec(W)||_1\\+ alpha * l1_{ratio} * ||vec(H)||_1\\+ 0.5 * alpha * (1 - l1_{ratio}) * ||W||_{Fro}^2\\+ 0.5 * alpha * (1 - l1_{ratio}) * ||H||_{Fro}^2\end{aligned}\end{align} \] Where: \(||A||_{Fro}^2 = \sum_{i,j} A_{ij}^2\) (Frobenius norm) \(||vec(A)||_1 = \sum_{i,j} abs(A_{ij})\) (Elementwise L1 norm) For multiplicative-update (‘mu’) solver, the Frobenius norm \((0.5 * ||X - WH||_{Fro}^2)\) can be changed into another beta-divergence loss, by changing the beta_loss parameter. The objective function is minimized with an alternating minimization of W and H. If H is given and update_H=False, it solves for W only. Parameters
Xarray-like of shape (n_samples, n_features)
Constant matrix.
Warray-like of shape (n_samples, n_components), default=None
If init=’custom’, it is used as initial guess for the solution.
Harray-like of shape (n_components, n_features), default=None
If init=’custom’, it is used as initial guess for the solution. If update_H=False, it is used as a constant, to solve for W only.
n_componentsint, default=None
Number of components, if n_components is not set all features are kept.
init{‘random’, ‘nndsvd’, ‘nndsvda’, ‘nndsvdar’, ‘custom’}, default=None
Method used to initialize the procedure. Valid options:
None: ‘nndsvd’ if n_components < n_features, otherwise ‘random’.
‘random’: non-negative random matrices, scaled with: sqrt(X.mean() / n_components)
‘nndsvd’: Nonnegative Double Singular Value Decomposition (NNDSVD) initialization (better for sparseness)
‘nndsvda’: NNDSVD with zeros filled with the average of X (better when sparsity is not desired)
‘nndsvdar’: NNDSVD with zeros filled with small random values (generally faster, less accurate alternative to NNDSVDa for when sparsity is not desired)
‘custom’: use custom matrices W and H if update_H=True. If update_H=False, then only custom matrix H is used. Changed in version 0.23: The default value of init changed from ‘random’ to None in 0.23.
update_Hbool, default=True
Set to True, both W and H will be estimated from initial guesses. Set to False, only W will be estimated.
solver{‘cd’, ‘mu’}, default=’cd’
Numerical solver to use:
‘cd’ is a Coordinate Descent solver that uses Fast Hierarchical
Alternating Least Squares (Fast HALS). ‘mu’ is a Multiplicative Update solver. New in version 0.17: Coordinate Descent solver. New in version 0.19: Multiplicative Update solver.
beta_lossfloat or {‘frobenius’, ‘kullback-leibler’, ‘itakura-saito’}, default=’frobenius’
Beta divergence to be minimized, measuring the distance between X and the dot product WH. Note that values different from ‘frobenius’ (or 2) and ‘kullback-leibler’ (or 1) lead to significantly slower fits. Note that for beta_loss <= 0 (or ‘itakura-saito’), the input matrix X cannot contain zeros. Used only in ‘mu’ solver. New in version 0.19.
tolfloat, default=1e-4
Tolerance of the stopping condition.
max_iterint, default=200
Maximum number of iterations before timing out.
alphafloat, default=0.
Constant that multiplies the regularization terms.
l1_ratiofloat, default=0.
The regularization mixing parameter, with 0 <= l1_ratio <= 1. For l1_ratio = 0 the penalty is an elementwise L2 penalty (aka Frobenius Norm). For l1_ratio = 1 it is an elementwise L1 penalty. For 0 < l1_ratio < 1, the penalty is a combination of L1 and L2.
regularization{‘both’, ‘components’, ‘transformation’}, default=None
Select whether the regularization affects the components (H), the transformation (W), both or none of them.
random_stateint, RandomState instance or None, default=None
Used for NMF initialisation (when init == ‘nndsvdar’ or ‘random’), and in Coordinate Descent. Pass an int for reproducible results across multiple function calls. See Glossary.
verboseint, default=0
The verbosity level.
shufflebool, default=False
If true, randomize the order of coordinates in the CD solver. Returns
Wndarray of shape (n_samples, n_components)
Solution to the non-negative least squares problem.
Hndarray of shape (n_components, n_features)
Solution to the non-negative least squares problem.
n_iterint
Actual number of iterations. References Cichocki, Andrzej, and P. H. A. N. Anh-Huy. “Fast local algorithms for large scale nonnegative matrix and tensor factorizations.” IEICE transactions on fundamentals of electronics, communications and computer sciences 92.3: 708-721, 2009. Fevotte, C., & Idier, J. (2011). Algorithms for nonnegative matrix factorization with the beta-divergence. Neural Computation, 23(9). Examples >>> import numpy as np
>>> X = np.array([[1,1], [2, 1], [3, 1.2], [4, 1], [5, 0.8], [6, 1]])
>>> from sklearn.decomposition import non_negative_factorization
>>> W, H, n_iter = non_negative_factorization(X, n_components=2,
... init='random', random_state=0) | sklearn.modules.generated.sklearn.decomposition.non_negative_factorization |
sklearn.decomposition.sparse_encode
sklearn.decomposition.sparse_encode(X, dictionary, *, gram=None, cov=None, algorithm='lasso_lars', n_nonzero_coefs=None, alpha=None, copy_cov=True, init=None, max_iter=1000, n_jobs=None, check_input=True, verbose=0, positive=False) [source]
Sparse coding. Each row of the result is the solution to a sparse coding problem. The goal is to find a sparse array code such that: X ~= code * dictionary
Read more in the User Guide. Parameters
Xndarray of shape (n_samples, n_features)
Data matrix.
dictionaryndarray of shape (n_components, n_features)
The dictionary matrix against which to solve the sparse coding of the data. Some of the algorithms assume normalized rows for meaningful output.
gramndarray of shape (n_components, n_components), default=None
Precomputed Gram matrix, dictionary * dictionary'.
covndarray of shape (n_components, n_samples), default=None
Precomputed covariance, dictionary' * X.
algorithm{‘lasso_lars’, ‘lasso_cd’, ‘lars’, ‘omp’, ‘threshold’}, default=’lasso_lars’
The algorithm used:
'lars': uses the least angle regression method (linear_model.lars_path);
'lasso_lars': uses Lars to compute the Lasso solution;
'lasso_cd': uses the coordinate descent method to compute the Lasso solution (linear_model.Lasso). lasso_lars will be faster if the estimated components are sparse;
'omp': uses orthogonal matching pursuit to estimate the sparse solution;
'threshold': squashes to zero all coefficients less than regularization from the projection dictionary * data'.
n_nonzero_coefsint, default=None
Number of nonzero coefficients to target in each column of the solution. This is only used by algorithm='lars' and algorithm='omp' and is overridden by alpha in the omp case. If None, then n_nonzero_coefs=int(n_features / 10).
alphafloat, default=None
If algorithm='lasso_lars' or algorithm='lasso_cd', alpha is the penalty applied to the L1 norm. If algorithm='threshold', alpha is the absolute value of the threshold below which coefficients will be squashed to zero. If algorithm='omp', alpha is the tolerance parameter: the value of the reconstruction error targeted. In this case, it overrides n_nonzero_coefs. If None, default to 1.
copy_covbool, default=True
Whether to copy the precomputed covariance matrix; if False, it may be overwritten.
initndarray of shape (n_samples, n_components), default=None
Initialization value of the sparse codes. Only used if algorithm='lasso_cd'.
max_iterint, default=1000
Maximum number of iterations to perform if algorithm='lasso_cd' or 'lasso_lars'.
n_jobsint, default=None
Number of parallel jobs to run. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
check_inputbool, default=True
If False, the input arrays X and dictionary will not be checked.
verboseint, default=0
Controls the verbosity; the higher, the more messages.
positivebool, default=False
Whether to enforce positivity when finding the encoding. New in version 0.20. Returns
codendarray of shape (n_samples, n_components)
The sparse codes. See also
sklearn.linear_model.lars_path
sklearn.linear_model.orthogonal_mp
sklearn.linear_model.Lasso
SparseCoder | sklearn.modules.generated.sklearn.decomposition.sparse_encode |
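The parameters above can be exercised end to end. A minimal sketch (toy data, not from the original docs): encode two samples against a random three-atom dictionary using orthogonal matching pursuit with a one-nonzero-coefficient budget.

```python
import numpy as np
from sklearn.decomposition import sparse_encode

rng = np.random.RandomState(0)
# Toy dictionary: 3 atoms (n_components) in a 4-dimensional feature space.
dictionary = rng.randn(3, 4)
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)  # unit-norm atoms
X = rng.randn(2, 4)  # two samples to encode

# algorithm='omp' with n_nonzero_coefs=1 targets one nonzero per code row.
code = sparse_encode(X, dictionary, algorithm='omp', n_nonzero_coefs=1)
print(code.shape)  # (n_samples, n_components) -> (2, 3)
```

Reconstruction is then `code @ dictionary`, which approximates `X` under the requested sparsity budget.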
sklearn.feature_extraction.image.extract_patches_2d
sklearn.feature_extraction.image.extract_patches_2d(image, patch_size, *, max_patches=None, random_state=None) [source]
Reshape a 2D image into a collection of patches. The resulting patches are allocated in a dedicated array. Read more in the User Guide. Parameters
imagendarray of shape (image_height, image_width) or (image_height, image_width, n_channels)
The original image data. For color images, the last dimension specifies the channel: a RGB image would have n_channels=3.
patch_sizetuple of int (patch_height, patch_width)
The dimensions of one patch.
max_patchesint or float, default=None
The maximum number of patches to extract. If max_patches is a float between 0 and 1, it is taken to be a proportion of the total number of patches.
random_stateint, RandomState instance, default=None
Determines the random number generator used for random sampling when max_patches is not None. Use an int to make the randomness deterministic. See Glossary. Returns
patchesarray of shape (n_patches, patch_height, patch_width) or (n_patches, patch_height, patch_width, n_channels)
The collection of patches extracted from the image, where n_patches is either max_patches or the total number of patches that can be extracted. Examples >>> from sklearn.datasets import load_sample_image
>>> from sklearn.feature_extraction import image
>>> # Use the array data from the first image in this dataset:
>>> one_image = load_sample_image("china.jpg")
>>> print('Image shape: {}'.format(one_image.shape))
Image shape: (427, 640, 3)
>>> patches = image.extract_patches_2d(one_image, (2, 2))
>>> print('Patches shape: {}'.format(patches.shape))
Patches shape: (272214, 2, 2, 3)
>>> # Here are just two of these patches:
>>> print(patches[1])
[[[174 201 231]
[174 201 231]]
[[173 200 230]
[173 200 230]]]
>>> print(patches[800])
[[[187 214 243]
[188 215 244]]
[[187 214 243]
[188 215 244]]]
Examples using sklearn.feature_extraction.image.extract_patches_2d
Online learning of a dictionary of parts of faces
Image denoising using dictionary learning | sklearn.modules.generated.sklearn.feature_extraction.image.extract_patches_2d |
sklearn.feature_extraction.image.grid_to_graph
sklearn.feature_extraction.image.grid_to_graph(n_x, n_y, n_z=1, *, mask=None, return_as=<class 'scipy.sparse.coo.coo_matrix'>, dtype=<class 'int'>) [source]
Graph of the pixel-to-pixel connections. Edges exist if 2 voxels are connected. Parameters
n_xint
Dimension in x axis
n_yint
Dimension in y axis
n_zint, default=1
Dimension in z axis
maskndarray of shape (n_x, n_y, n_z), dtype=bool, default=None
An optional mask of the image, to consider only part of the pixels.
return_asnp.ndarray or a sparse matrix class, default=sparse.coo_matrix
The class to use to build the returned adjacency matrix.
dtypedtype, default=int
The data of the returned sparse matrix. By default it is int. Notes For scikit-learn versions 0.14.1 and prior, return_as=np.ndarray was handled by returning a dense np.matrix instance. Going forward, np.ndarray returns an np.ndarray, as expected. For compatibility, user code relying on this method should wrap its calls in np.asarray to avoid type issues. | sklearn.modules.generated.sklearn.feature_extraction.image.grid_to_graph |
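A hedged illustration of the call (toy grid sizes, not from the original docs): build the connectivity graph of a 3x3 grid, then restrict it with a boolean mask.

```python
import numpy as np
from scipy import sparse
from sklearn.feature_extraction.image import grid_to_graph

# Adjacency of a 3x3 pixel grid: one node per voxel, edges between
# axis-aligned neighbours.
graph = grid_to_graph(n_x=3, n_y=3)
print(graph.shape)  # (9, 9)

# With a mask, only the selected voxels become nodes of the graph.
mask = np.zeros((3, 3, 1), dtype=bool)  # mask shape is (n_x, n_y, n_z)
mask[:2, :2, 0] = True                  # keep the top-left 2x2 block
masked = grid_to_graph(3, 3, mask=mask)
print(masked.shape)  # (4, 4)
```

Such connectivity matrices are typically passed to structured clustering, e.g. as the `connectivity` argument of agglomerative clustering.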
sklearn.feature_extraction.image.img_to_graph
sklearn.feature_extraction.image.img_to_graph(img, *, mask=None, return_as=<class 'scipy.sparse.coo.coo_matrix'>, dtype=None) [source]
Graph of the pixel-to-pixel gradient connections. Edges are weighted with the gradient values. Read more in the User Guide. Parameters
imgndarray of shape (height, width) or (height, width, channel)
2D or 3D image.
maskndarray of shape (height, width) or (height, width, channel), dtype=bool, default=None
An optional mask of the image, to consider only part of the pixels.
return_asnp.ndarray or a sparse matrix class, default=sparse.coo_matrix
The class to use to build the returned adjacency matrix.
dtypedtype, default=None
The data of the returned sparse matrix. By default it is the dtype of img. Notes For scikit-learn versions 0.14.1 and prior, return_as=np.ndarray was handled by returning a dense np.matrix instance. Going forward, np.ndarray returns an np.ndarray, as expected. For compatibility, user code relying on this method should wrap its calls in np.asarray to avoid type issues. | sklearn.modules.generated.sklearn.feature_extraction.image.img_to_graph |
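A small sketch (toy image, not from the original docs) showing the shape and symmetry of the returned gradient graph:

```python
import numpy as np
from sklearn.feature_extraction.image import img_to_graph

# Tiny 2x2 grayscale image; edges connect neighbouring pixels and are
# weighted by the local gradient (absolute intensity difference).
img = np.array([[0.0, 1.0],
                [2.0, 3.0]])
graph = img_to_graph(img)
dense = graph.toarray()
print(graph.shape)  # (4, 4): one node per pixel
```

The adjacency matrix is symmetric, since each pixel-to-pixel connection is stored in both directions.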
sklearn.feature_extraction.image.reconstruct_from_patches_2d
sklearn.feature_extraction.image.reconstruct_from_patches_2d(patches, image_size) [source]
Reconstruct the image from all of its patches. Patches are assumed to overlap and the image is constructed by filling in the patches from left to right, top to bottom, averaging the overlapping regions. Read more in the User Guide. Parameters
patchesndarray of shape (n_patches, patch_height, patch_width) or (n_patches, patch_height, patch_width, n_channels)
The complete set of patches. If the patches contain colour information, channels are indexed along the last dimension: RGB patches would have n_channels=3.
image_sizetuple of int (image_height, image_width) or (image_height, image_width, n_channels)
The size of the image that will be reconstructed. Returns
imagendarray of shape image_size
The reconstructed image.
Examples using sklearn.feature_extraction.image.reconstruct_from_patches_2d
Image denoising using dictionary learning | sklearn.modules.generated.sklearn.feature_extraction.image.reconstruct_from_patches_2d |
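The averaging behaviour described above can be checked with a round trip; a minimal sketch (toy data, not from the original docs):

```python
import numpy as np
from sklearn.feature_extraction.image import (
    extract_patches_2d,
    reconstruct_from_patches_2d,
)

rng = np.random.RandomState(0)
image = rng.rand(8, 8)

# Every overlapping 3x3 patch: (8 - 3 + 1) ** 2 = 36 patches.
patches = extract_patches_2d(image, (3, 3))

# Averaging unmodified patches back together recovers the original
# image (up to floating-point error).
rebuilt = reconstruct_from_patches_2d(patches, image.shape)
print(np.allclose(image, rebuilt))  # True
```

In denoising pipelines the patches are modified (e.g. sparsely re-encoded) between the two calls, and the overlap averaging smooths the result.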
sklearn.feature_selection.chi2
sklearn.feature_selection.chi2(X, y) [source]
Compute chi-squared stats between each non-negative feature and class. This score can be used to select the n_features features with the highest values for the test chi-squared statistic from X, which must contain only non-negative features such as booleans or frequencies (e.g., term counts in document classification), relative to the classes. Recall that the chi-square test measures dependence between stochastic variables, so using this function “weeds out” the features that are the most likely to be independent of class and therefore irrelevant for classification. Read more in the User Guide. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Sample vectors.
yarray-like of shape (n_samples,)
Target vector (class labels). Returns
chi2array, shape = (n_features,)
chi2 statistics of each feature.
pvalarray, shape = (n_features,)
p-values of each feature. See also
f_classif
ANOVA F-value between label/feature for classification tasks.
f_regression
F-value between label/feature for regression tasks. Notes Complexity of this algorithm is O(n_classes * n_features).
Examples using sklearn.feature_selection.chi2
Selecting dimensionality reduction with Pipeline and GridSearchCV
SVM-Anova: SVM with univariate feature selection
Classification of text documents using sparse features | sklearn.modules.generated.sklearn.feature_selection.chi2 |
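A hedged worked example (toy term counts, not from the original docs): the class-conditional feature sums are compared against the expectation under independence, so a feature that occurs in only one class scores highest.

```python
import numpy as np
from sklearn.feature_selection import chi2

# Non-negative term counts for 4 documents and 3 features.
X = np.array([[1, 0, 3],
              [2, 0, 1],
              [0, 4, 0],
              [0, 5, 1]])
y = np.array([0, 0, 1, 1])

scores, pvalues = chi2(X, y)
# Feature 1 occurs only in class 1, so it gets the largest statistic.
print(scores.argmax())  # 1
```

The returned `scores` can be fed to SelectKBest(chi2, k=...) to keep only the strongest features.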
sklearn.feature_selection.f_classif
sklearn.feature_selection.f_classif(X, y) [source]
Compute the ANOVA F-value for the provided sample. Read more in the User Guide. Parameters
X{array-like, sparse matrix} shape = [n_samples, n_features]
The set of regressors that will be tested sequentially.
yarray-like of shape (n_samples,)
The target vector (class labels). Returns
Farray, shape = [n_features,]
The set of F values.
pvalarray, shape = [n_features,]
The set of p-values. See also
chi2
Chi-squared stats of non-negative features for classification tasks.
f_regression
F-value between label/feature for regression tasks.
Examples using sklearn.feature_selection.f_classif
Pipeline Anova SVM
Univariate Feature Selection | sklearn.modules.generated.sklearn.feature_selection.f_classif |
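A minimal sketch (synthetic data, not from the original docs): a feature whose mean shifts with the class receives a much larger F-value than independent noise.

```python
import numpy as np
from sklearn.feature_selection import f_classif

rng = np.random.RandomState(0)
y = np.array([0] * 10 + [1] * 10)
# Feature 0 shifts by 1 with the class; feature 1 is pure noise.
X = np.column_stack([y + 0.1 * rng.randn(20), rng.randn(20)])

F, pval = f_classif(X, y)
print(F.shape)      # (2,)
print(F[0] > F[1])  # True: the class-dependent feature dominates
```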
sklearn.feature_selection.f_regression
sklearn.feature_selection.f_regression(X, y, *, center=True) [source]
Univariate linear regression tests. Linear model for testing the individual effect of each of many regressors. This is a scoring function to be used in a feature selection procedure, not a free standing feature selection procedure. This is done in 2 steps: The correlation between each regressor and the target is computed, that is, ((X[:, i] - mean(X[:, i])) * (y - mean_y)) / (std(X[:, i]) * std(y)). It is converted to an F score then to a p-value. For more on usage see the User Guide. Parameters
X{array-like, sparse matrix} shape = (n_samples, n_features)
The set of regressors that will be tested sequentially.
yarray-like of shape (n_samples,)
The target vector.
centerbool, default=True
If true, X and y will be centered. Returns
Farray, shape=(n_features,)
F values of features.
pvalarray, shape=(n_features,)
p-values of F-scores. See also
mutual_info_regression
Mutual information for a continuous target.
f_classif
ANOVA F-value between label/feature for classification tasks.
chi2
Chi-squared stats of non-negative features for classification tasks.
SelectKBest
Select features based on the k highest scores.
SelectFpr
Select features based on a false positive rate test.
SelectFdr
Select features based on an estimated false discovery rate.
SelectFwe
Select features based on family-wise error rate.
SelectPercentile
Select features based on percentile of the highest scores.
Examples using sklearn.feature_selection.f_regression
Feature agglomeration vs. univariate selection
Comparison of F-test and mutual information | sklearn.modules.generated.sklearn.feature_selection.f_regression |
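The two-step procedure above (correlation, then F score and p-value) can be sketched on synthetic data (toy example, not from the original docs):

```python
import numpy as np
from sklearn.feature_selection import f_regression

rng = np.random.RandomState(0)
X = rng.randn(50, 3)
# The target depends (linearly) on column 0 only.
y = 2.0 * X[:, 0] + 0.1 * rng.randn(50)

F, pval = f_regression(X, y)
print(F.argmax())       # 0
print(pval[0] < 0.001)  # True: the informative column is significant
```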
sklearn.feature_selection.mutual_info_classif
sklearn.feature_selection.mutual_info_classif(X, y, *, discrete_features='auto', n_neighbors=3, copy=True, random_state=None) [source]
Estimate mutual information for a discrete target variable. Mutual information (MI) [1] between two random variables is a non-negative value, which measures the dependency between the variables. It is equal to zero if and only if two random variables are independent, and higher values mean higher dependency. The function relies on nonparametric methods based on entropy estimation from k-nearest neighbors distances as described in [2] and [3]. Both methods are based on the idea originally proposed in [4]. It can be used for univariate feature selection; read more in the User Guide. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Feature matrix.
yarray-like of shape (n_samples,)
Target vector.
discrete_features{‘auto’, bool, array-like}, default=’auto’
If bool, then determines whether to consider all features discrete or continuous. If array, then it should be either a boolean mask with shape (n_features,) or array with indices of discrete features. If ‘auto’, it is assigned to False for dense X and to True for sparse X.
n_neighborsint, default=3
Number of neighbors to use for MI estimation for continuous variables, see [2] and [3]. Higher values reduce variance of the estimation, but could introduce a bias.
copybool, default=True
Whether to make a copy of the given data. If set to False, the initial data will be overwritten.
random_stateint, RandomState instance or None, default=None
Determines random number generation for adding small noise to continuous variables in order to remove repeated values. Pass an int for reproducible results across multiple function calls. See Glossary. Returns
mindarray, shape (n_features,)
Estimated mutual information between each feature and the target. Notes The term “discrete features” is used instead of naming them “categorical”, because it describes the essence more accurately. For example, pixel intensities of an image are discrete features (but hardly categorical) and you will get better results if you mark them as such. Also note that treating a continuous variable as discrete, and vice versa, will usually give incorrect results, so be attentive about that. True mutual information can’t be negative. If its estimate turns out to be negative, it is replaced by zero. References
1
Mutual Information on Wikipedia.
2(1,2)
A. Kraskov, H. Stogbauer and P. Grassberger, “Estimating mutual information”. Phys. Rev. E 69, 2004.
3(1,2)
B. C. Ross “Mutual Information between Discrete and Continuous Data Sets”. PLoS ONE 9(2), 2014.
4
L. F. Kozachenko, N. N. Leonenko, “Sample Estimate of the Entropy of a Random Vector”, Probl. Peredachi Inf., 23:2 (1987), 9-16 | sklearn.modules.generated.sklearn.feature_selection.mutual_info_classif |
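A hedged sketch of the estimator in action (synthetic data, not from the original docs): a feature that is a noisy copy of the label carries far more mutual information than independent noise.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.RandomState(0)
y = rng.randint(0, 2, size=100)
# Feature 0 is a noisy copy of the label; feature 1 is independent noise.
X = np.column_stack([y + 0.2 * rng.randn(100), rng.randn(100)])

# random_state controls the small noise added to break ties in the
# k-nearest-neighbors entropy estimate.
mi = mutual_info_classif(X, y, random_state=0)
print(mi.shape)       # (2,)
print(mi[0] > mi[1])  # True: the label-dependent feature scores higher
```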
sklearn.feature_selection.mutual_info_regression
sklearn.feature_selection.mutual_info_regression(X, y, *, discrete_features='auto', n_neighbors=3, copy=True, random_state=None) [source]
Estimate mutual information for a continuous target variable. Mutual information (MI) [1] between two random variables is a non-negative value, which measures the dependency between the variables. It is equal to zero if and only if two random variables are independent, and higher values mean higher dependency. The function relies on nonparametric methods based on entropy estimation from k-nearest neighbors distances as described in [2] and [3]. Both methods are based on the idea originally proposed in [4]. It can be used for univariate feature selection; read more in the User Guide. Parameters
Xarray-like or sparse matrix, shape (n_samples, n_features)
Feature matrix.
yarray-like of shape (n_samples,)
Target vector.
discrete_features{‘auto’, bool, array-like}, default=’auto’
If bool, then determines whether to consider all features discrete or continuous. If array, then it should be either a boolean mask with shape (n_features,) or array with indices of discrete features. If ‘auto’, it is assigned to False for dense X and to True for sparse X.
n_neighborsint, default=3
Number of neighbors to use for MI estimation for continuous variables, see [2] and [3]. Higher values reduce variance of the estimation, but could introduce a bias.
copybool, default=True
Whether to make a copy of the given data. If set to False, the initial data will be overwritten.
random_stateint, RandomState instance or None, default=None
Determines random number generation for adding small noise to continuous variables in order to remove repeated values. Pass an int for reproducible results across multiple function calls. See Glossary. Returns
mindarray, shape (n_features,)
Estimated mutual information between each feature and the target. Notes The term “discrete features” is used instead of naming them “categorical”, because it describes the essence more accurately. For example, pixel intensities of an image are discrete features (but hardly categorical) and you will get better results if you mark them as such. Also note that treating a continuous variable as discrete, and vice versa, will usually give incorrect results, so be attentive about that. True mutual information can’t be negative. If its estimate turns out to be negative, it is replaced by zero. References
1
Mutual Information on Wikipedia.
2(1,2)
A. Kraskov, H. Stogbauer and P. Grassberger, “Estimating mutual information”. Phys. Rev. E 69, 2004.
3(1,2)
B. C. Ross “Mutual Information between Discrete and Continuous Data Sets”. PLoS ONE 9(2), 2014.
4
L. F. Kozachenko, N. N. Leonenko, “Sample Estimate of the Entropy of a Random Vector”, Probl. Peredachi Inf., 23:2 (1987), 9-16
Examples using sklearn.feature_selection.mutual_info_regression
Comparison of F-test and mutual information | sklearn.modules.generated.sklearn.feature_selection.mutual_info_regression |
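A short sketch (synthetic data, not from the original docs) of why MI is useful where an F-test is not: the dependence below is nonlinear, but mutual information makes no linearity assumption.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.RandomState(0)
X = rng.rand(200, 2)
# y depends nonlinearly on feature 0 only; its linear correlation with y
# is weak, yet its mutual information with y is large.
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.randn(200)

mi = mutual_info_regression(X, y, random_state=0)
print(mi[0] > mi[1])  # True
```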
sklearn.get_config
sklearn.get_config() [source]
Retrieve current values for configuration set by set_config. Returns
configdict
Keys are parameter names that can be passed to set_config. See also
config_context
Context manager for global scikit-learn configuration.
set_config
Set global scikit-learn configuration. | sklearn.modules.generated.sklearn.get_config |
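A minimal round-trip sketch (not from the original docs) showing that the returned dict mirrors set_config's keyword arguments:

```python
import sklearn

# Keys of the returned dict match set_config's keyword arguments
# (e.g. 'assume_finite', 'working_memory').
config = sklearn.get_config()
print('assume_finite' in config)  # True

# Round trip: change a setting, observe it, then restore the default.
sklearn.set_config(assume_finite=True)
print(sklearn.get_config()['assume_finite'])  # True
sklearn.set_config(assume_finite=False)
```

For temporary changes, config_context is usually preferable, since it restores the previous configuration automatically on exit.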
sklearn.inspection.partial_dependence
sklearn.inspection.partial_dependence(estimator, X, features, *, response_method='auto', percentiles=(0.05, 0.95), grid_resolution=100, method='auto', kind='legacy') [source]
Partial dependence of features. Partial dependence of a feature (or a set of features) corresponds to the average response of an estimator for each possible value of the feature. Read more in the User Guide. Warning For GradientBoostingClassifier and GradientBoostingRegressor, the 'recursion' method (used by default) will not account for the init predictor of the boosting process. In practice, this will produce the same values as 'brute' up to a constant offset in the target response, provided that init is a constant estimator (which is the default). However, if init is not a constant estimator, the partial dependence values are incorrect for 'recursion' because the offset will be sample-dependent. It is preferable to use the 'brute' method. Note that this only applies to GradientBoostingClassifier and GradientBoostingRegressor, not to HistGradientBoostingClassifier and HistGradientBoostingRegressor. Parameters
estimatorBaseEstimator
A fitted estimator object implementing predict, predict_proba, or decision_function. Multioutput-multiclass classifiers are not supported.
X{array-like or dataframe} of shape (n_samples, n_features)
X is used to generate a grid of values for the target features (where the partial dependence will be evaluated), and also to generate values for the complement features when the method is ‘brute’.
featuresarray-like of {int, str}
The feature (e.g. [0]) or pair of interacting features (e.g. [(0, 1)]) for which the partial dependency should be computed.
response_method{‘auto’, ‘predict_proba’, ‘decision_function’}, default=’auto’
Specifies whether to use predict_proba or decision_function as the target response. For regressors this parameter is ignored and the response is always the output of predict. By default, predict_proba is tried first and we revert to decision_function if it doesn’t exist. If method is ‘recursion’, the response is always the output of decision_function.
percentilestuple of float, default=(0.05, 0.95)
The lower and upper percentile used to create the extreme values for the grid. Must be in [0, 1].
grid_resolutionint, default=100
The number of equally spaced points on the grid, for each target feature.
method{‘auto’, ‘recursion’, ‘brute’}, default=’auto’
The method used to calculate the averaged predictions:
'recursion' is only supported for some tree-based estimators (namely GradientBoostingClassifier, GradientBoostingRegressor, HistGradientBoostingClassifier, HistGradientBoostingRegressor, DecisionTreeRegressor, RandomForestRegressor) when kind='average'. This is more efficient in terms of speed. With this method, the target response of a classifier is always the decision function, not the predicted probabilities. Since the 'recursion' method implicitly computes the average of the Individual Conditional Expectation (ICE) by design, it is not compatible with ICE and thus kind must be 'average'.
'brute' is supported for any estimator, but is more computationally intensive.
'auto': the 'recursion' is used for estimators that support it, and 'brute' is used otherwise. Please see this note for differences between the 'brute' and 'recursion' method.
kind{‘legacy’, ‘average’, ‘individual’, ‘both’}, default=’legacy’
Whether to return the partial dependence averaged across all the samples in the dataset or one line per sample or both. See Returns below. Note that the fast method='recursion' option is only available for kind='average'. Plotting individual dependencies requires using the slower method='brute' option. New in version 0.24. Deprecated since version 0.24: kind='legacy' is deprecated and will be removed in version 1.1. kind='average' will be the new default. It is intended to migrate from the ndarray output to Bunch output. Returns
predictionsndarray or Bunch
if kind='legacy', return value is ndarray of shape (n_outputs, len(values[0]), len(values[1]), …)
The predictions for all the points in the grid, averaged over all samples in X (or over the training data if method is ‘recursion’).
if kind='individual', 'average' or 'both', return value is Bunch
Dictionary-like object, with the following attributes.
individualndarray of shape (n_outputs, n_instances, len(values[0]), len(values[1]), …)
The predictions for all the points in the grid for all samples in X. This is also known as Individual Conditional Expectation (ICE)
averagendarray of shape (n_outputs, len(values[0]), len(values[1]), …)
The predictions for all the points in the grid, averaged over all samples in X (or over the training data if method is ‘recursion’). Only available when kind=’both’.
valuesseq of 1d ndarrays
The values with which the grid has been created. The generated grid is a cartesian product of the arrays in values. len(values) == len(features). The size of each array values[j] is either grid_resolution, or the number of unique values in X[:, j], whichever is smaller. n_outputs corresponds to the number of classes in a multi-class setting, or to the number of tasks for multi-output regression. For classical regression and binary classification n_outputs==1. n_values_feature_j corresponds to the size of values[j].
valuesseq of 1d ndarrays
The values with which the grid has been created. The generated grid is a cartesian product of the arrays in values. len(values) == len(features). The size of each array values[j] is either grid_resolution, or the number of unique values in X[:, j], whichever is smaller. Only available when kind="legacy". See also
plot_partial_dependence
Plot Partial Dependence.
PartialDependenceDisplay
Partial Dependence visualization. Examples >>> X = [[0, 0, 2], [1, 0, 0]]
>>> y = [0, 1]
>>> from sklearn.ensemble import GradientBoostingClassifier
>>> gb = GradientBoostingClassifier(random_state=0).fit(X, y)
>>> partial_dependence(gb, features=[0], X=X, percentiles=(0, 1),
... grid_resolution=2)
(array([[-4.52..., 4.52...]]), [array([ 0., 1.])])
Examples using sklearn.inspection.partial_dependence
Partial Dependence and Individual Conditional Expectation Plots | sklearn.modules.generated.sklearn.inspection.partial_dependence |
sklearn.inspection.permutation_importance
sklearn.inspection.permutation_importance(estimator, X, y, *, scoring=None, n_repeats=5, n_jobs=None, random_state=None, sample_weight=None) [source]
Permutation importance for feature evaluation [BRE]. The estimator is required to be a fitted estimator. X can be the data set used to train the estimator or a hold-out set. The permutation importance of a feature is calculated as follows. First, a baseline metric, defined by scoring, is evaluated on a (potentially different) dataset defined by X. Next, a feature column from the validation set is permuted and the metric is evaluated again. The permutation importance is defined to be the difference between the baseline metric and the metric from permuting the feature column. Read more in the User Guide. Parameters
estimatorobject
An estimator that has already been fitted and is compatible with scorer.
Xndarray or DataFrame, shape (n_samples, n_features)
Data on which permutation importance will be computed.
yarray-like or None, shape (n_samples, ) or (n_samples, n_classes)
Targets for supervised or None for unsupervised.
scoringstring, callable or None, default=None
Scorer to use. It can be a single string (see The scoring parameter: defining model evaluation rules) or a callable (see Defining your scoring strategy from metric functions). If None, the estimator’s default scorer is used.
n_repeatsint, default=5
Number of times to permute a feature.
n_jobsint or None, default=None
Number of jobs to run in parallel. The computation is done by computing permutation score for each columns and parallelized over the columns. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
random_stateint, RandomState instance, default=None
Pseudo-random number generator to control the permutations of each feature. Pass an int to get reproducible results across function calls. See Glossary.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights used in scoring. New in version 0.24. Returns
resultBunch
Dictionary-like object, with the following attributes.
importances_meanndarray, shape (n_features, )
Mean of feature importance over n_repeats.
importances_stdndarray, shape (n_features, )
Standard deviation over n_repeats.
importancesndarray, shape (n_features, n_repeats)
Raw permutation importance scores. References
BRE
L. Breiman, “Random Forests”, Machine Learning, 45(1), 5-32, 2001. https://doi.org/10.1023/A:1010933404324 Examples >>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.inspection import permutation_importance
>>> X = [[1, 9, 9],[1, 9, 9],[1, 9, 9],
... [0, 9, 9],[0, 9, 9],[0, 9, 9]]
>>> y = [1, 1, 1, 0, 0, 0]
>>> clf = LogisticRegression().fit(X, y)
>>> result = permutation_importance(clf, X, y, n_repeats=10,
... random_state=0)
>>> result.importances_mean
array([0.4666..., 0. , 0. ])
>>> result.importances_std
array([0.2211..., 0. , 0. ])
Examples using sklearn.inspection.permutation_importance
Release Highlights for scikit-learn 0.22
Feature importances with forests of trees
Gradient Boosting regression
Permutation Importance with Multicollinear or Correlated Features
Permutation Importance vs Random Forest Feature Importance (MDI) | sklearn.modules.generated.sklearn.inspection.permutation_importance |
sklearn.inspection.plot_partial_dependence
sklearn.inspection.plot_partial_dependence(estimator, X, features, *, feature_names=None, target=None, response_method='auto', n_cols=3, grid_resolution=100, percentiles=(0.05, 0.95), method='auto', n_jobs=None, verbose=0, line_kw=None, contour_kw=None, ax=None, kind='average', subsample=1000, random_state=None) [source]
Partial dependence (PD) and individual conditional expectation (ICE) plots. Partial dependence plots, individual conditional expectation plots or an overlay of both of them can be plotted by setting the kind parameter. The len(features) plots are arranged in a grid with n_cols columns. Two-way partial dependence plots are plotted as contour plots. The deciles of the feature values will be shown with tick marks on the x-axes for one-way plots, and on both axes for two-way plots. Read more in the User Guide. Note plot_partial_dependence does not support using the same axes with multiple calls. To plot the partial dependence for multiple estimators, please pass the axes created by the first call to the second call: >>> from sklearn.inspection import plot_partial_dependence
>>> from sklearn.datasets import make_friedman1
>>> from sklearn.linear_model import LinearRegression
>>> from sklearn.ensemble import RandomForestRegressor
>>> X, y = make_friedman1()
>>> est1 = LinearRegression().fit(X, y)
>>> est2 = RandomForestRegressor().fit(X, y)
>>> disp1 = plot_partial_dependence(est1, X,
... [1, 2])
>>> disp2 = plot_partial_dependence(est2, X, [1, 2],
... ax=disp1.axes_)
Warning For GradientBoostingClassifier and GradientBoostingRegressor, the 'recursion' method (used by default) will not account for the init predictor of the boosting process. In practice, this will produce the same values as 'brute' up to a constant offset in the target response, provided that init is a constant estimator (which is the default). However, if init is not a constant estimator, the partial dependence values are incorrect for 'recursion' because the offset will be sample-dependent. It is preferable to use the 'brute' method. Note that this only applies to GradientBoostingClassifier and GradientBoostingRegressor, not to HistGradientBoostingClassifier and HistGradientBoostingRegressor. Parameters
estimatorBaseEstimator
A fitted estimator object implementing predict, predict_proba, or decision_function. Multioutput-multiclass classifiers are not supported.
X{array-like or dataframe} of shape (n_samples, n_features)
X is used to generate a grid of values for the target features (where the partial dependence will be evaluated), and also to generate values for the complement features when the method is 'brute'.
featureslist of {int, str, pair of int, pair of str}
The target features for which to create the PDPs. If features[i] is an integer or a string, a one-way PDP is created; if features[i] is a tuple, a two-way PDP is created (only supported with kind='average'). Each tuple must be of size 2. If any entry is a string, then it must be in feature_names.
feature_namesarray-like of shape (n_features,), dtype=str, default=None
Name of each feature; feature_names[i] holds the name of the feature with index i. By default, the name of the feature corresponds to their numerical index for NumPy array and their column name for pandas dataframe.
targetint, default=None
In a multiclass setting, specifies the class for which the PDPs should be computed. Note that for binary classification, the positive class (index 1) is always used. In a multioutput setting, specifies the task for which the PDPs should be computed. Ignored in binary classification or classical regression settings.
response_method{‘auto’, ‘predict_proba’, ‘decision_function’}, default=’auto’
Specifies whether to use predict_proba or decision_function as the target response. For regressors this parameter is ignored and the response is always the output of predict. By default, predict_proba is tried first and we revert to decision_function if it doesn’t exist. If method is 'recursion', the response is always the output of decision_function.
n_colsint, default=3
The maximum number of columns in the grid plot. Only active when ax is a single axis or None.
grid_resolutionint, default=100
The number of equally spaced points on the axes of the plots, for each target feature.
percentilestuple of float, default=(0.05, 0.95)
The lower and upper percentile used to create the extreme values for the PDP axes. Must be in [0, 1].
methodstr, default=’auto’
The method used to calculate the averaged predictions:
'recursion' is only supported for some tree-based estimators (namely GradientBoostingClassifier, GradientBoostingRegressor, HistGradientBoostingClassifier, HistGradientBoostingRegressor, DecisionTreeRegressor, RandomForestRegressor) but is more efficient in terms of speed. With this method, the target response of a classifier is always the decision function, not the predicted probabilities. Since the 'recursion' method implicitly computes the average of the ICEs by design, it is not compatible with ICE and thus kind must be 'average'.
'brute' is supported for any estimator, but is more computationally intensive.
'auto': the 'recursion' is used for estimators that support it, and 'brute' is used otherwise. Please see this note for differences between the 'brute' and 'recursion' method.
n_jobsint, default=None
The number of CPUs to use to compute the partial dependences. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
verboseint, default=0
Verbose output during PD computations.
line_kwdict, default=None
Dict with keywords passed to the matplotlib.pyplot.plot call. For one-way partial dependence plots.
contour_kwdict, default=None
Dict with keywords passed to the matplotlib.pyplot.contourf call. For two-way partial dependence plots.
axMatplotlib axes or array-like of Matplotlib axes, default=None
If a single axis is passed in, it is treated as a bounding axes and a grid of partial dependence plots will be drawn within these bounds. The n_cols parameter controls the number of columns in the grid. If an array-like of axes are passed in, the partial dependence plots will be drawn directly into these axes. If None, a figure and a bounding axes is created and treated as the single axes case. New in version 0.22.
kind{‘average’, ‘individual’, ‘both’}, default=’average’
Whether to plot the partial dependence averaged across all the samples in the dataset or one line per sample or both.
kind='average' results in the traditional PD plot;
kind='individual' results in the ICE plot. Note that the fast method='recursion' option is only available for kind='average'. Plotting individual dependencies requires using the slower method='brute' option. New in version 0.24.
subsamplefloat, int or None, default=1000
Sampling for ICE curves when kind is ‘individual’ or ‘both’. If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to be used to plot ICE curves. If int, represents the absolute number of samples to use. Note that the full dataset is still used to calculate averaged partial dependence when kind='both'. New in version 0.24.
random_stateint, RandomState instance or None, default=None
Controls the randomness of the selected samples when subsample is not None and kind is either 'both' or 'individual'. See Glossary for details. New in version 0.24. Returns
displayPartialDependenceDisplay
See also
partial_dependence
Compute Partial Dependence values.
PartialDependenceDisplay
Partial Dependence visualization. Examples >>> from sklearn.datasets import make_friedman1
>>> from sklearn.ensemble import GradientBoostingRegressor
>>> X, y = make_friedman1()
>>> clf = GradientBoostingRegressor(n_estimators=10).fit(X, y)
>>> plot_partial_dependence(clf, X, [0, (0, 1)])
Examples using sklearn.inspection.plot_partial_dependence
Release Highlights for scikit-learn 0.23
Release Highlights for scikit-learn 0.24
Monotonic Constraints
Partial Dependence and Individual Conditional Expectation Plots
Advanced Plotting With Partial Dependence | sklearn.modules.generated.sklearn.inspection.plot_partial_dependence |
sklearn.isotonic.check_increasing
sklearn.isotonic.check_increasing(x, y) [source]
Determine whether y is monotonically correlated with x. y is found increasing or decreasing with respect to x based on a Spearman correlation test. Parameters
xarray-like of shape (n_samples,)
Training data.
yarray-like of shape (n_samples,)
Training target. Returns
increasing_boolboolean
Whether the relationship is increasing or decreasing. Notes The Spearman correlation coefficient is estimated from the data, and the sign of the resulting estimate is used as the result. In the event that the 95% confidence interval based on Fisher transform spans zero, a warning is raised. References Fisher transformation. Wikipedia. https://en.wikipedia.org/wiki/Fisher_transformation | sklearn.modules.generated.sklearn.isotonic.check_increasing |
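A minimal sketch of how check_increasing behaves (synthetic data, not from the upstream reference):

```python
import numpy as np
from sklearn.isotonic import check_increasing

x = np.arange(10)
y_up = 2.0 * x + 1.0     # monotonically increasing in x
y_down = -0.5 * x + 3.0  # monotonically decreasing in x

# The sign of the Spearman correlation estimate decides the result.
print(check_increasing(x, y_up))    # True
print(check_increasing(x, y_down))  # False
```

The confidence-interval warning mentioned in the Notes is not triggered here, since the data are perfectly monotone.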
sklearn.isotonic.isotonic_regression
sklearn.isotonic.isotonic_regression(y, *, sample_weight=None, y_min=None, y_max=None, increasing=True) [source]
Solve the isotonic regression model. Read more in the User Guide. Parameters
yarray-like of shape (n_samples,)
The data.
sample_weightarray-like of shape (n_samples,), default=None
Weights on each point of the regression. If None, weight is set to 1 (equal weights).
y_minfloat, default=None
Lower bound on the lowest predicted value (the minimum value may still be higher). If not set, defaults to -inf.
y_maxfloat, default=None
Upper bound on the highest predicted value (the maximum may still be lower). If not set, defaults to +inf.
increasingbool, default=True
Whether to compute y_ as increasing (if set to True) or decreasing (if set to False). Returns
y_list of floats
Isotonic fit of y. References “Active set algorithms for isotonic regression; A unifying framework” by Michael J. Best and Nilotpal Chakravarti, section 3. | sklearn.modules.generated.sklearn.isotonic.isotonic_regression |
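A short sketch of the pool-adjacent-violators behavior behind isotonic_regression (the input values are made up for illustration):

```python
from sklearn.isotonic import isotonic_regression

# The single decreasing pair (3.0, 2.0) violates monotonicity and is
# pooled to its average, 2.5; the other values are already in order,
# so the fit is 1.0, 2.5, 2.5, 4.0.
y_fit = isotonic_regression([1.0, 3.0, 2.0, 4.0])
```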
sklearn.linear_model.enet_path
sklearn.linear_model.enet_path(X, y, *, l1_ratio=0.5, eps=0.001, n_alphas=100, alphas=None, precompute='auto', Xy=None, copy_X=True, coef_init=None, verbose=False, return_n_iter=False, positive=False, check_input=True, **params) [source]
Compute elastic net path with coordinate descent. The elastic net optimization function varies for mono and multi-outputs. For mono-output tasks it is: 1 / (2 * n_samples) * ||y - Xw||^2_2
+ alpha * l1_ratio * ||w||_1
+ 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2
For multi-output tasks it is: (1 / (2 * n_samples)) * ||Y - XW||_Fro^2
+ alpha * l1_ratio * ||W||_21
+ 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2
Where: ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}
i.e. the sum of norm of each row. Read more in the User Guide. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output then X can be sparse.
y{array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_outputs)
Target values.
l1_ratiofloat, default=0.5
Number between 0 and 1 passed to elastic net (scaling between l1 and l2 penalties). l1_ratio=1 corresponds to the Lasso.
epsfloat, default=1e-3
Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3.
n_alphasint, default=100
Number of alphas along the regularization path.
alphasndarray, default=None
List of alphas where to compute the models. If None alphas are set automatically.
precompute‘auto’, bool or array-like of shape (n_features, n_features), default=’auto’
Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix can also be passed as argument.
Xyarray-like of shape (n_features,) or (n_features, n_outputs), default=None
Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed.
copy_Xbool, default=True
If True, X will be copied; else, it may be overwritten.
coef_initndarray of shape (n_features, ), default=None
The initial values of the coefficients.
verbosebool or int, default=False
Amount of verbosity.
return_n_iterbool, default=False
Whether to return the number of iterations or not.
positivebool, default=False
If set to True, forces coefficients to be positive. (Only allowed when y.ndim == 1).
check_inputbool, default=True
If set to False, the input validation checks are skipped (including the Gram matrix when provided). It is assumed that they are handled by the caller.
**paramskwargs
Keyword arguments passed to the coordinate descent solver. Returns
alphasndarray of shape (n_alphas,)
The alphas along the path where models are computed.
coefsndarray of shape (n_features, n_alphas) or (n_outputs, n_features, n_alphas)
Coefficients along the path.
dual_gapsndarray of shape (n_alphas,)
The dual gaps at the end of the optimization for each alpha.
n_iterslist of int
The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha. (Is returned when return_n_iter is set to True). See also
MultiTaskElasticNet
MultiTaskElasticNetCV
ElasticNet
ElasticNetCV
Notes For an example, see examples/linear_model/plot_lasso_coordinate_descent_path.py.
Examples using sklearn.linear_model.enet_path
Lasso and Elastic Net | sklearn.modules.generated.sklearn.linear_model.enet_path |
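A hedged usage sketch for enet_path (synthetic data; variable names are illustrative):

```python
import numpy as np
from sklearn.linear_model import enet_path

rng = np.random.RandomState(0)
X = rng.randn(50, 3)
y = X @ np.array([1.5, 0.0, -2.0]) + 0.01 * rng.randn(50)

# Path over 10 automatically chosen alphas, strongest regularization first
alphas, coefs, dual_gaps = enet_path(X, y, l1_ratio=0.5, n_alphas=10)
# alphas has shape (10,), coefs has shape (n_features, 10) = (3, 10)
```

Columns of coefs correspond to the entries of alphas, so coefs[:, -1] is the least-regularized model on the path.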
sklearn.linear_model.lars_path
sklearn.linear_model.lars_path(X, y, Xy=None, *, Gram=None, max_iter=500, alpha_min=0, method='lar', copy_X=True, eps=2.220446049250313e-16, copy_Gram=True, verbose=0, return_path=True, return_n_iter=False, positive=False) [source]
Compute Least Angle Regression or Lasso path using the LARS algorithm [1]. The optimization objective for the case method=’lasso’ is: (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
In the case of method=’lars’, the objective function is only known in the form of an implicit equation (see discussion in [1]). Read more in the User Guide. Parameters
XNone or array-like of shape (n_samples, n_features)
Input data. Note that if X is None then the Gram matrix must be specified, i.e., cannot be None or False.
yNone or array-like of shape (n_samples,)
Input targets.
Xyarray-like of shape (n_features,) or (n_features, n_targets), default=None
Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed.
GramNone, ‘auto’, array-like of shape (n_features, n_features), default=None
Precomputed Gram matrix (X’ * X), if 'auto', the Gram matrix is precomputed from the given X, if there are more samples than features.
max_iterint, default=500
Maximum number of iterations to perform, set to infinity for no limit.
alpha_minfloat, default=0
Minimum correlation along the path. It corresponds to the regularization parameter alpha parameter in the Lasso.
method{‘lar’, ‘lasso’}, default=’lar’
Specifies the returned model. Select 'lar' for Least Angle Regression, 'lasso' for the Lasso.
copy_Xbool, default=True
If False, X is overwritten.
epsfloat, default=np.finfo(float).eps
The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the tol parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization.
copy_Grambool, default=True
If False, Gram is overwritten.
verboseint, default=0
Controls output verbosity.
return_pathbool, default=True
If return_path==True returns the entire path, else returns only the last point of the path.
return_n_iterbool, default=False
Whether to return the number of iterations.
positivebool, default=False
Restrict coefficients to be >= 0. This option is only allowed with method ‘lasso’. Note that the model coefficients will not converge to the ordinary-least-squares solution for small values of alpha. Only coefficients up to the smallest alpha value (alphas_[alphas_ > 0.].min() when fit_path=True) reached by the stepwise Lars-Lasso algorithm are typically in congruence with the solution of the coordinate descent lasso_path function. Returns
alphasarray-like of shape (n_alphas + 1,)
Maximum of covariances (in absolute value) at each iteration. n_alphas is either max_iter, n_features or the number of nodes in the path with alpha >= alpha_min, whichever is smaller.
activearray-like of shape (n_alphas,)
Indices of active variables at the end of the path.
coefsarray-like of shape (n_features, n_alphas + 1)
Coefficients along the path
n_iterint
Number of iterations run. Returned only if return_n_iter is set to True. See also
lars_path_gram
lasso_path
lasso_path_gram
LassoLars
Lars
LassoLarsCV
LarsCV
sklearn.decomposition.sparse_encode
References
1
“Least Angle Regression”, Efron et al. http://statweb.stanford.edu/~tibs/ftp/lars.pdf
2
Wikipedia entry on the Least-angle regression
3
Wikipedia entry on the Lasso
Examples using sklearn.linear_model.lars_path
Lasso path using LARS | sklearn.modules.generated.sklearn.linear_model.lars_path |
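A hedged sketch of lars_path on synthetic data (feature 1 carries the signal):

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.RandomState(0)
X = rng.randn(40, 5)
y = 3.0 * X[:, 1] + 0.1 * rng.randn(40)

# Full Lasso path: alphas shrink toward 0 while features enter `active`
alphas, active, coefs = lars_path(X, y, method='lasso')
# coefs has one row per feature and one column per point on the path
```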
sklearn.linear_model.lars_path_gram
sklearn.linear_model.lars_path_gram(Xy, Gram, *, n_samples, max_iter=500, alpha_min=0, method='lar', copy_X=True, eps=2.220446049250313e-16, copy_Gram=True, verbose=0, return_path=True, return_n_iter=False, positive=False) [source]
lars_path in the sufficient statistics mode [1]. The optimization objective for the case method=’lasso’ is: (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
In the case of method=’lars’, the objective function is only known in the form of an implicit equation (see discussion in [1]). Read more in the User Guide. Parameters
Xyarray-like of shape (n_features,) or (n_features, n_targets)
Xy = np.dot(X.T, y).
Gramarray-like of shape (n_features, n_features)
Gram = np.dot(X.T, X).
n_samplesint or float
Equivalent size of sample.
max_iterint, default=500
Maximum number of iterations to perform, set to infinity for no limit.
alpha_minfloat, default=0
Minimum correlation along the path. It corresponds to the regularization parameter alpha parameter in the Lasso.
method{‘lar’, ‘lasso’}, default=’lar’
Specifies the returned model. Select 'lar' for Least Angle Regression, 'lasso' for the Lasso.
copy_Xbool, default=True
If False, X is overwritten.
epsfloat, default=np.finfo(float).eps
The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the tol parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization.
copy_Grambool, default=True
If False, Gram is overwritten.
verboseint, default=0
Controls output verbosity.
return_pathbool, default=True
If return_path==True returns the entire path, else returns only the last point of the path.
return_n_iterbool, default=False
Whether to return the number of iterations.
positivebool, default=False
Restrict coefficients to be >= 0. This option is only allowed with method ‘lasso’. Note that the model coefficients will not converge to the ordinary-least-squares solution for small values of alpha. Only coefficients up to the smallest alpha value (alphas_[alphas_ > 0.].min() when fit_path=True) reached by the stepwise Lars-Lasso algorithm are typically in congruence with the solution of the coordinate descent lasso_path function. Returns
alphasarray-like of shape (n_alphas + 1,)
Maximum of covariances (in absolute value) at each iteration. n_alphas is either max_iter, n_features or the number of nodes in the path with alpha >= alpha_min, whichever is smaller.
activearray-like of shape (n_alphas,)
Indices of active variables at the end of the path.
coefsarray-like of shape (n_features, n_alphas + 1)
Coefficients along the path
n_iterint
Number of iterations run. Returned only if return_n_iter is set to True. See also
lars_path
lasso_path
lasso_path_gram
LassoLars
Lars
LassoLarsCV
LarsCV
sklearn.decomposition.sparse_encode
References
1
“Least Angle Regression”, Efron et al. http://statweb.stanford.edu/~tibs/ftp/lars.pdf
2
Wikipedia entry on the Least-angle regression
3
Wikipedia entry on the Lasso | sklearn.modules.generated.sklearn.linear_model.lars_path_gram |
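The sufficient-statistics entry point can be cross-checked against lars_path on the same data; a sketch with synthetic data and the default method='lar':

```python
import numpy as np
from sklearn.linear_model import lars_path, lars_path_gram

rng = np.random.RandomState(0)
X = rng.randn(30, 4)
y = rng.randn(30)

# Only X.T @ y, X.T @ X and the sample count are needed in this mode
alphas_g, active_g, coefs_g = lars_path_gram(
    Xy=X.T @ y, Gram=X.T @ X, n_samples=X.shape[0])

# The path should agree (up to floating-point error) with lars_path(X, y)
alphas, active, coefs = lars_path(X, y)
```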
sklearn.linear_model.lasso_path
sklearn.linear_model.lasso_path(X, y, *, eps=0.001, n_alphas=100, alphas=None, precompute='auto', Xy=None, copy_X=True, coef_init=None, verbose=False, return_n_iter=False, positive=False, **params) [source]
Compute Lasso path with coordinate descent. The Lasso optimization function varies for mono and multi-outputs. For mono-output tasks it is: (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
For multi-output tasks it is: (1 / (2 * n_samples)) * ||Y - XW||^2_Fro + alpha * ||W||_21
Where: ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}
i.e. the sum of norm of each row. Read more in the User Guide. Parameters
X{array-like, sparse matrix} of shape (n_samples, n_features)
Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output then X can be sparse.
y{array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_outputs)
Target values
epsfloat, default=1e-3
Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3
n_alphasint, default=100
Number of alphas along the regularization path
alphasndarray, default=None
List of alphas where to compute the models. If None alphas are set automatically
precompute‘auto’, bool or array-like of shape (n_features, n_features), default=’auto’
Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto' let us decide. The Gram matrix can also be passed as argument.
Xyarray-like of shape (n_features,) or (n_features, n_outputs), default=None
Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed.
copy_Xbool, default=True
If True, X will be copied; else, it may be overwritten.
coef_initndarray of shape (n_features, ), default=None
The initial values of the coefficients.
verbosebool or int, default=False
Amount of verbosity.
return_n_iterbool, default=False
whether to return the number of iterations or not.
positivebool, default=False
If set to True, forces coefficients to be positive. (Only allowed when y.ndim == 1).
**paramskwargs
keyword arguments passed to the coordinate descent solver. Returns
alphasndarray of shape (n_alphas,)
The alphas along the path where models are computed.
coefsndarray of shape (n_features, n_alphas) or (n_outputs, n_features, n_alphas)
Coefficients along the path.
dual_gapsndarray of shape (n_alphas,)
The dual gaps at the end of the optimization for each alpha.
n_iterslist of int
The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha. See also
lars_path
Lasso
LassoLars
LassoCV
LassoLarsCV
sklearn.decomposition.sparse_encode
Notes For an example, see examples/linear_model/plot_lasso_coordinate_descent_path.py. To avoid unnecessary memory duplication the X argument of the fit method should be directly passed as a Fortran-contiguous numpy array. Note that in certain cases, the Lars solver may be significantly faster at computing the same path; in particular, linear interpolation can be used to retrieve model coefficients between the values output by lars_path. Examples Comparing lasso_path and lars_path with interpolation: >>> import numpy as np
>>> from sklearn.linear_model import lasso_path
>>> X = np.array([[1, 2, 3.1], [2.3, 5.4, 4.3]]).T
>>> y = np.array([1, 2, 3.1])
>>> # Use lasso_path to compute a coefficient path
>>> _, coef_path, _ = lasso_path(X, y, alphas=[5., 1., .5])
>>> print(coef_path)
[[0. 0. 0.46874778]
[0.2159048 0.4425765 0.23689075]]
>>> # Now use lars_path and 1D linear interpolation to compute the
>>> # same path
>>> from sklearn.linear_model import lars_path
>>> alphas, active, coef_path_lars = lars_path(X, y, method='lasso')
>>> from scipy import interpolate
>>> coef_path_continuous = interpolate.interp1d(alphas[::-1],
... coef_path_lars[:, ::-1])
>>> print(coef_path_continuous([5., 1., .5]))
[[0. 0. 0.46915237]
[0.2159048 0.4425765 0.23668876]]
Examples using sklearn.linear_model.lasso_path
Lasso and Elastic Net | sklearn.modules.generated.sklearn.linear_model.lasso_path |
sklearn.linear_model.orthogonal_mp
sklearn.linear_model.orthogonal_mp(X, y, *, n_nonzero_coefs=None, tol=None, precompute=False, copy_X=True, return_path=False, return_n_iter=False) [source]
Orthogonal Matching Pursuit (OMP). Solves n_targets Orthogonal Matching Pursuit problems. An instance of the problem has the form:
When parametrized by the number of non-zero coefficients using n_nonzero_coefs: argmin ||y - X gamma||^2 subject to ||gamma||_0 <= n_nonzero_coefs
When parametrized by error using the parameter tol: argmin ||gamma||_0 subject to ||y - X gamma||^2 <= tol
Read more in the User Guide. Parameters
Xndarray of shape (n_samples, n_features)
Input data. Columns are assumed to have unit norm.
yndarray of shape (n_samples,) or (n_samples, n_targets)
Input targets.
n_nonzero_coefsint, default=None
Desired number of non-zero entries in the solution. If None (by default) this value is set to 10% of n_features.
tolfloat, default=None
Maximum norm of the residual. If not None, overrides n_nonzero_coefs.
precompute‘auto’ or bool, default=False
Whether to perform precomputations. Improves performance when n_targets or n_samples is very large.
copy_Xbool, default=True
Whether the design matrix X must be copied by the algorithm. A false value is only helpful if X is already Fortran-ordered, otherwise a copy is made anyway.
return_pathbool, default=False
Whether to return every value of the nonzero coefficients along the forward path. Useful for cross-validation.
return_n_iterbool, default=False
Whether or not to return the number of iterations. Returns
coefndarray of shape (n_features,) or (n_features, n_targets)
Coefficients of the OMP solution. If return_path=True, this contains the whole coefficient path. In this case its shape is (n_features, n_features) or (n_features, n_targets, n_features) and iterating over the last axis yields coefficients in increasing order of active features.
n_itersarray-like or int
Number of active features across every target. Returned only if return_n_iter is set to True. See also
OrthogonalMatchingPursuit
orthogonal_mp_gram
lars_path
sklearn.decomposition.sparse_encode
Notes Orthogonal matching pursuit was introduced in S. Mallat, Z. Zhang, Matching pursuits with time-frequency dictionaries, IEEE Transactions on Signal Processing, Vol. 41, No. 12. (December 1993), pp. 3397-3415. (http://blanche.polytechnique.fr/~mallat/papiers/MallatPursuit93.pdf) This implementation is based on Rubinstein, R., Zibulevsky, M. and Elad, M., Efficient Implementation of the K-SVD Algorithm using Batch Orthogonal Matching Pursuit Technical Report - CS Technion, April 2008. https://www.cs.technion.ac.il/~ronrubin/Publications/KSVD-OMP-v2.pdf | sklearn.modules.generated.sklearn.linear_model.orthogonal_mp |
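A hedged recovery sketch for orthogonal_mp (synthetic dictionary with unit-norm columns and a noiseless 2-sparse target):

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.RandomState(0)
X = rng.randn(30, 10)
X /= np.linalg.norm(X, axis=0)   # columns are assumed to have unit norm

gamma = np.zeros(10)
gamma[[2, 7]] = [4.0, -3.0]      # a 2-sparse coefficient vector
y = X @ gamma

coef = orthogonal_mp(X, y, n_nonzero_coefs=2)
# For a noiseless 2-sparse target like this, OMP typically recovers the
# support {2, 7} exactly; at most 2 entries of coef are nonzero.
```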
sklearn.linear_model.orthogonal_mp_gram
sklearn.linear_model.orthogonal_mp_gram(Gram, Xy, *, n_nonzero_coefs=None, tol=None, norms_squared=None, copy_Gram=True, copy_Xy=True, return_path=False, return_n_iter=False) [source]
Gram Orthogonal Matching Pursuit (OMP). Solves n_targets Orthogonal Matching Pursuit problems using only the Gram matrix X.T * X and the product X.T * y. Read more in the User Guide. Parameters
Gramndarray of shape (n_features, n_features)
Gram matrix of the input data: X.T * X.
Xyndarray of shape (n_features,) or (n_features, n_targets)
Input targets multiplied by X: X.T * y.
n_nonzero_coefsint, default=None
Desired number of non-zero entries in the solution. If None (by default) this value is set to 10% of n_features.
tolfloat, default=None
Maximum norm of the residual. If not None, overrides n_nonzero_coefs.
norms_squaredarray-like of shape (n_targets,), default=None
Squared L2 norms of the rows of y. Required if tol is not None.
copy_Grambool, default=True
Whether the gram matrix must be copied by the algorithm. A false value is only helpful if it is already Fortran-ordered, otherwise a copy is made anyway.
copy_Xybool, default=True
Whether the covariance vector Xy must be copied by the algorithm. If False, it may be overwritten.
return_pathbool, default=False
Whether to return every value of the nonzero coefficients along the forward path. Useful for cross-validation.
return_n_iterbool, default=False
Whether or not to return the number of iterations. Returns
coefndarray of shape (n_features,) or (n_features, n_targets)
Coefficients of the OMP solution. If return_path=True, this contains the whole coefficient path. In this case its shape is (n_features, n_features) or (n_features, n_targets, n_features) and iterating over the last axis yields coefficients in increasing order of active features.
n_itersarray-like or int
Number of active features across every target. Returned only if return_n_iter is set to True. See also
OrthogonalMatchingPursuit
orthogonal_mp
lars_path
sklearn.decomposition.sparse_encode
Notes Orthogonal matching pursuit was introduced in S. Mallat, Z. Zhang, Matching pursuits with time-frequency dictionaries, IEEE Transactions on Signal Processing, Vol. 41, No. 12. (December 1993), pp. 3397-3415. (http://blanche.polytechnique.fr/~mallat/papiers/MallatPursuit93.pdf) This implementation is based on Rubinstein, R., Zibulevsky, M. and Elad, M., Efficient Implementation of the K-SVD Algorithm using Batch Orthogonal Matching Pursuit Technical Report - CS Technion, April 2008. https://www.cs.technion.ac.il/~ronrubin/Publications/KSVD-OMP-v2.pdf
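Since only X.T @ X and X.T @ y enter the Gram variant, it can be cross-checked against orthogonal_mp; a sketch with synthetic data:

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp, orthogonal_mp_gram

rng = np.random.RandomState(0)
X = rng.randn(30, 10)
X /= np.linalg.norm(X, axis=0)   # unit-norm columns
y = rng.randn(30)

coef_gram = orthogonal_mp_gram(X.T @ X, X.T @ y, n_nonzero_coefs=3)
coef_full = orthogonal_mp(X, y, n_nonzero_coefs=3)
# Both solve the same problem, so the solutions should agree
# up to floating-point error.
```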
sklearn.linear_model.PassiveAggressiveRegressor
sklearn.linear_model.PassiveAggressiveRegressor(*, C=1.0, fit_intercept=True, max_iter=1000, tol=0.001, early_stopping=False, validation_fraction=0.1, n_iter_no_change=5, shuffle=True, verbose=0, loss='epsilon_insensitive', epsilon=0.1, random_state=None, warm_start=False, average=False) [source]
Passive Aggressive Regressor Read more in the User Guide. Parameters
Cfloat, default=1.0
Maximum step size (regularization). Defaults to 1.0.
fit_interceptbool, default=True
Whether the intercept should be estimated or not. If False, the data is assumed to be already centered. Defaults to True.
max_iterint, default=1000
The maximum number of passes over the training data (aka epochs). It only impacts the behavior in the fit method, and not the partial_fit method. New in version 0.19.
tolfloat or None, default=1e-3
The stopping criterion. If it is not None, the iterations will stop when (loss > previous_loss - tol). New in version 0.19.
early_stoppingbool, default=False
Whether to use early stopping to terminate training when validation score is not improving. If set to True, it will automatically set aside a fraction of training data as validation and terminate training when validation score is not improving by at least tol for n_iter_no_change consecutive epochs. New in version 0.20.
validation_fractionfloat, default=0.1
The proportion of training data to set aside as validation set for early stopping. Must be between 0 and 1. Only used if early_stopping is True. New in version 0.20.
n_iter_no_changeint, default=5
Number of iterations with no improvement to wait before early stopping. New in version 0.20.
shufflebool, default=True
Whether or not the training data should be shuffled after each epoch.
verboseinteger, default=0
The verbosity level
lossstring, default=”epsilon_insensitive”
The loss function to be used: epsilon_insensitive: equivalent to PA-I in the reference paper. squared_epsilon_insensitive: equivalent to PA-II in the reference paper.
epsilonfloat, default=0.1
If the difference between the current prediction and the correct label is below this threshold, the model is not updated.
random_stateint, RandomState instance, default=None
Used to shuffle the training data, when shuffle is set to True. Pass an int for reproducible output across multiple function calls. See Glossary.
warm_startbool, default=False
When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See the Glossary. Repeatedly calling fit or partial_fit when warm_start is True can result in a different solution than when calling fit a single time because of the way the data is shuffled.
averagebool or int, default=False
When set to True, computes the averaged SGD weights and stores the result in the coef_ attribute. If set to an int greater than 1, averaging will begin once the total number of samples seen reaches average. So average=10 will begin averaging after seeing 10 samples. New in version 0.19: parameter average to use weights averaging in SGD Attributes
coef_array, shape = [1, n_features] if n_classes == 2 else [n_classes, n_features]
Weights assigned to the features.
intercept_array, shape = [1] if n_classes == 2 else [n_classes]
Constants in decision function.
n_iter_int
The actual number of iterations to reach the stopping criterion.
t_int
Number of weight updates performed during training. Same as (n_iter_ * n_samples). See also
SGDRegressor
References Online Passive-Aggressive Algorithms (http://jmlr.csail.mit.edu/papers/volume7/crammer06a/crammer06a.pdf) K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, Y. Singer - JMLR (2006) Examples >>> from sklearn.linear_model import PassiveAggressiveRegressor
>>> from sklearn.datasets import make_regression
>>> X, y = make_regression(n_features=4, random_state=0)
>>> regr = PassiveAggressiveRegressor(max_iter=100, random_state=0,
... tol=1e-3)
>>> regr.fit(X, y)
PassiveAggressiveRegressor(max_iter=100, random_state=0)
>>> print(regr.coef_)
[20.48736655 34.18818427 67.59122734 87.94731329]
>>> print(regr.intercept_)
[-0.02306214]
>>> print(regr.predict([[0, 0, 0, 0]]))
[-0.02306214] | sklearn.modules.generated.sklearn.linear_model.passiveaggressiveregressor |
sklearn.linear_model.ridge_regression
sklearn.linear_model.ridge_regression(X, y, alpha, *, sample_weight=None, solver='auto', max_iter=None, tol=0.001, verbose=0, random_state=None, return_n_iter=False, return_intercept=False, check_input=True) [source]
Solve the ridge equation by the method of normal equations. Read more in the User Guide. Parameters
X{ndarray, sparse matrix, LinearOperator} of shape (n_samples, n_features)
Training data
yndarray of shape (n_samples,) or (n_samples, n_targets)
Target values
alphafloat or array-like of shape (n_targets,)
Regularization strength; must be a positive float. Regularization improves the conditioning of the problem and reduces the variance of the estimates. Larger values specify stronger regularization. Alpha corresponds to 1 / (2C) in other linear models such as LogisticRegression or LinearSVC. If an array is passed, penalties are assumed to be specific to the targets. Hence they must correspond in number.
sample_weightfloat or array-like of shape (n_samples,), default=None
Individual weights for each sample. If given a float, every sample will have the same weight. If sample_weight is not None and solver=’auto’, the solver will be set to ‘cholesky’. New in version 0.17.
solver{‘auto’, ‘svd’, ‘cholesky’, ‘lsqr’, ‘sparse_cg’, ‘sag’, ‘saga’}, default=’auto’
Solver to use in the computational routines:
‘auto’ chooses the solver automatically based on the type of data.
‘svd’ uses a Singular Value Decomposition of X to compute the Ridge coefficients. More stable for singular matrices than ‘cholesky’.
‘cholesky’ uses the standard scipy.linalg.solve function to obtain a closed-form solution via a Cholesky decomposition of dot(X.T, X).
‘sparse_cg’ uses the conjugate gradient solver as found in scipy.sparse.linalg.cg. As an iterative algorithm, this solver is more appropriate than ‘cholesky’ for large-scale data (possibility to set tol and max_iter).
‘lsqr’ uses the dedicated regularized least-squares routine scipy.sparse.linalg.lsqr. It is the fastest and uses an iterative procedure.
‘sag’ uses a Stochastic Average Gradient descent, and ‘saga’ uses its improved, unbiased version named SAGA. Both methods also use an iterative procedure, and are often faster than other solvers when both n_samples and n_features are large. Note that ‘sag’ and ‘saga’ fast convergence is only guaranteed on features with approximately the same scale. You can preprocess the data with a scaler from sklearn.preprocessing.
The last five solvers support both dense and sparse data. However, only ‘sag’ and ‘sparse_cg’ support sparse input when fit_intercept is True. New in version 0.17: Stochastic Average Gradient descent solver. New in version 0.19: SAGA solver.
max_iterint, default=None
Maximum number of iterations for the conjugate gradient solver. For the ‘sparse_cg’ and ‘lsqr’ solvers, the default value is determined by scipy.sparse.linalg. For the ‘sag’ and ‘saga’ solvers, the default value is 1000.
tolfloat, default=1e-3
Precision of the solution.
verboseint, default=0
Verbosity level. Setting verbose > 0 will display additional information depending on the solver used.
random_stateint, RandomState instance, default=None
Used when solver == ‘sag’ or ‘saga’ to shuffle the data. See Glossary for details.
return_n_iterbool, default=False
If True, the method also returns n_iter, the actual number of iteration performed by the solver. New in version 0.17.
return_interceptbool, default=False
If True and if X is sparse, the method also returns the intercept, and the solver is automatically changed to ‘sag’. This is only a temporary fix for fitting the intercept with sparse data. For dense data, use sklearn.linear_model._preprocess_data before your regression. New in version 0.17.
check_inputbool, default=True
If False, the input arrays X and y will not be checked. New in version 0.21. Returns
coefndarray of shape (n_features,) or (n_targets, n_features)
Weight vector(s).
n_iterint, optional
The actual number of iteration performed by the solver. Only returned if return_n_iter is True.
interceptfloat or ndarray of shape (n_targets,)
The intercept of the model. Only returned if return_intercept is True and if X is a scipy sparse array. Notes This function won’t compute the intercept. | sklearn.modules.generated.sklearn.linear_model.ridge_regression |
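A quick sketch of the normal-equations view behind ridge_regression (synthetic data; recall from the Notes that the function does not fit an intercept, so center X and y first if one is needed):

```python
import numpy as np
from sklearn.linear_model import ridge_regression

rng = np.random.RandomState(0)
X = rng.randn(20, 3)
y = X @ np.array([1.0, -2.0, 0.5])

coef = ridge_regression(X, y, alpha=1.0)

# For dense data this matches the closed-form normal-equations solution
# (X.T X + alpha * I) w = X.T y, up to floating-point error.
w = np.linalg.solve(X.T @ X + 1.0 * np.eye(3), X.T @ y)
```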
sklearn.manifold.locally_linear_embedding
sklearn.manifold.locally_linear_embedding(X, *, n_neighbors, n_components, reg=0.001, eigen_solver='auto', tol=1e-06, max_iter=100, method='standard', hessian_tol=0.0001, modified_tol=1e-12, random_state=None, n_jobs=None) [source]
Perform a Locally Linear Embedding analysis on the data. Read more in the User Guide. Parameters
X{array-like, NearestNeighbors}
Sample data, shape = (n_samples, n_features), in the form of a numpy array or a NearestNeighbors object.
n_neighborsint
Number of neighbors to consider for each point.
n_componentsint
Number of coordinates for the manifold.
regfloat, default=1e-3
Regularization constant, multiplies the trace of the local covariance matrix of the distances.
eigen_solver{‘auto’, ‘arpack’, ‘dense’}, default=’auto’
auto: algorithm will attempt to choose the best method for the input data.
arpack: use Arnoldi iteration in shift-invert mode. For this method, M may be a dense matrix, sparse matrix, or general linear operator. Warning: ARPACK can be unstable for some problems. It is best to try several random seeds in order to check results.
dense: use standard dense matrix operations for the eigenvalue decomposition. For this method, M must be an array or matrix type. This method should be avoided for large problems.
tolfloat, default=1e-6
Tolerance for the ‘arpack’ method. Not used if eigen_solver == ‘dense’.
max_iterint, default=100
Maximum number of iterations for the arpack solver.
method{‘standard’, ‘hessian’, ‘modified’, ‘ltsa’}, default=’standard’
standard: use the standard locally linear embedding algorithm. See reference [1].
hessian: use the Hessian eigenmap method. This method requires n_neighbors > n_components * (1 + (n_components + 1) / 2). See reference [2].
modified: use the modified locally linear embedding algorithm. See reference [3].
ltsa: use the local tangent space alignment algorithm. See reference [4].
hessian_tolfloat, default=1e-4
Tolerance for Hessian eigenmapping method. Only used if method == ‘hessian’
modified_tolfloat, default=1e-12
Tolerance for modified LLE method. Only used if method == ‘modified’
random_stateint, RandomState instance, default=None
Determines the random number generator when solver == ‘arpack’. Pass an int for reproducible results across multiple function calls. See Glossary.
n_jobsint or None, default=None
The number of parallel jobs to run for neighbors search. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. Returns
Yarray-like, shape [n_samples, n_components]
Embedding vectors.
squared_errorfloat
Reconstruction error for the embedding vectors. Equivalent to norm(Y - W Y, 'fro')**2, where W are the reconstruction weights. References
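The call below is a minimal sketch on a synthetic Swiss roll (the neighbor count is an arbitrary illustration choice; note that all parameters after X are keyword-only):

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import locally_linear_embedding

X, _ = make_swiss_roll(n_samples=300, random_state=0)

# Returns the embedding and its reconstruction error
Y, err = locally_linear_embedding(X, n_neighbors=12, n_components=2)
print(Y.shape)  # (300, 2)
```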
1
Roweis, S. & Saul, L. Nonlinear dimensionality reduction by locally linear embedding. Science 290:2323 (2000).
2
Donoho, D. & Grimes, C. Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data. Proc Natl Acad Sci U S A. 100:5591 (2003).
3
Zhang, Z. & Wang, J. MLLE: Modified Locally Linear Embedding Using Multiple Weights. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.70.382
4
Zhang, Z. & Zha, H. Principal manifolds and nonlinear dimensionality reduction via tangent space alignment. Journal of Shanghai Univ. 8:406 (2004)
Examples using sklearn.manifold.locally_linear_embedding
Swiss Roll reduction with LLE | sklearn.modules.generated.sklearn.manifold.locally_linear_embedding |
sklearn.manifold.smacof
sklearn.manifold.smacof(dissimilarities, *, metric=True, n_components=2, init=None, n_init=8, n_jobs=None, max_iter=300, verbose=0, eps=0.001, random_state=None, return_n_iter=False) [source]
Computes multidimensional scaling using the SMACOF algorithm. The SMACOF (Scaling by MAjorizing a COmplicated Function) algorithm is a multidimensional scaling algorithm which minimizes an objective function (the stress) using a majorization technique. Stress majorization, also known as the Guttman Transform, guarantees a monotone convergence of stress, and is more powerful than traditional techniques such as gradient descent. The SMACOF algorithm for metric MDS can be summarized by the following steps: 1. Set an initial start configuration, randomly or not. 2. Compute the stress. 3. Compute the Guttman Transform. 4. Iterate steps 2 and 3 until convergence. The nonmetric algorithm adds a monotonic regression step before computing the stress. Parameters
dissimilaritiesndarray of shape (n_samples, n_samples)
Pairwise dissimilarities between the points. Must be symmetric.
metricbool, default=True
Compute metric or nonmetric SMACOF algorithm.
n_componentsint, default=2
Number of dimensions in which to immerse the dissimilarities. If an init array is provided, this option is overridden and the shape of init is used to determine the dimensionality of the embedding space.
initndarray of shape (n_samples, n_components), default=None
Starting configuration of the embedding to initialize the algorithm. By default, the algorithm is initialized with a randomly chosen array.
n_initint, default=8
Number of times the SMACOF algorithm will be run with different initializations. The final results will be the best output of the runs, determined by the run with the smallest final stress. If init is provided, this option is overridden and a single run is performed.
n_jobsint, default=None
The number of jobs to use for the computation. If multiple initializations are used (n_init), each run of the algorithm is computed in parallel. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
max_iterint, default=300
Maximum number of iterations of the SMACOF algorithm for a single run.
verboseint, default=0
Level of verbosity.
epsfloat, default=1e-3
Relative tolerance with respect to stress at which to declare convergence.
random_stateint, RandomState instance or None, default=None
Determines the random number generator used to initialize the centers. Pass an int for reproducible results across multiple function calls. See Glossary.
return_n_iterbool, default=False
Whether or not to return the number of iterations. Returns
Xndarray of shape (n_samples, n_components)
Coordinates of the points in a n_components-space.
stressfloat
The final value of the stress (sum of squared distance of the disparities and the distances for all constrained points).
n_iterint
The number of iterations corresponding to the best stress. Returned only if return_n_iter is set to True. Notes “Modern Multidimensional Scaling - Theory and Applications” Borg, I.; Groenen P. Springer Series in Statistics (1997) “Nonmetric multidimensional scaling: a numerical method” Kruskal, J. Psychometrika, 29 (1964) “Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis” Kruskal, J. Psychometrika, 29, (1964) | sklearn.modules.generated.sklearn.manifold.smacof |
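A minimal usage sketch of smacof: build a symmetric dissimilarity matrix from synthetic points and embed it in 2-D (the point count and dimensionality are arbitrary illustration choices):

```python
import numpy as np
from sklearn.manifold import smacof
from sklearn.metrics.pairwise import euclidean_distances

rng = np.random.RandomState(0)
points = rng.rand(20, 3)
# smacof expects a symmetric pairwise dissimilarity matrix
D = euclidean_distances(points)

X_embedded, stress = smacof(D, n_components=2, random_state=0)
print(X_embedded.shape)  # (20, 2)
```

With return_n_iter=True the function returns a third value, the iteration count of the best run.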
sklearn.manifold.spectral_embedding
sklearn.manifold.spectral_embedding(adjacency, *, n_components=8, eigen_solver=None, random_state=None, eigen_tol=0.0, norm_laplacian=True, drop_first=True) [source]
Project the sample on the first eigenvectors of the graph Laplacian. The adjacency matrix is used to compute a normalized graph Laplacian whose spectrum (especially the eigenvectors associated to the smallest eigenvalues) has an interpretation in terms of the minimal number of cuts necessary to split the graph into comparably sized components. This embedding can also ‘work’ even if the adjacency variable is not strictly the adjacency matrix of a graph but more generally an affinity or similarity matrix between samples (for instance the heat kernel of a Euclidean distance matrix or a k-NN matrix). However, care must be taken to always make the affinity matrix symmetric so that the eigenvector decomposition works as expected. Note: Laplacian Eigenmaps is the actual algorithm implemented here. Read more in the User Guide. Parameters
adjacency{array-like, sparse graph} of shape (n_samples, n_samples)
The adjacency matrix of the graph to embed.
n_componentsint, default=8
The dimension of the projection subspace.
eigen_solver{‘arpack’, ‘lobpcg’, ‘amg’}, default=None
The eigenvalue decomposition strategy to use. AMG requires pyamg to be installed. It can be faster on very large, sparse problems, but may also lead to instabilities. If None, then 'arpack' is used.
random_stateint, RandomState instance or None, default=None
Determines the random number generator used for the initialization of the lobpcg eigenvectors decomposition when solver == ‘amg’. Pass an int for reproducible results across multiple function calls. See Glossary.
eigen_tolfloat, default=0.0
Stopping criterion for eigendecomposition of the Laplacian matrix when using arpack eigen_solver.
norm_laplacianbool, default=True
If True, then compute normalized Laplacian.
drop_firstbool, default=True
Whether to drop the first eigenvector. For spectral embedding, this should be True as the first eigenvector should be a constant vector for a connected graph, but for spectral clustering, this should be kept as False to retain the first eigenvector. Returns
embeddingndarray of shape (n_samples, n_components)
The reduced samples. Notes Spectral Embedding (Laplacian Eigenmaps) is most useful when the graph has one connected component. If the graph has many components, the first few eigenvectors will simply uncover the connected components of the graph. References https://en.wikipedia.org/wiki/LOBPCG Toward the Optimal Preconditioned Eigensolver: Locally Optimal Block Preconditioned Conjugate Gradient Method Andrew V. Knyazev https://doi.org/10.1137%2FS1064827500366124 | sklearn.modules.generated.sklearn.manifold.spectral_embedding
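A minimal usage sketch of spectral_embedding: build a k-NN graph on synthetic points and symmetrize it before embedding, as the description above requires (the neighbor count is an arbitrary illustration choice):

```python
import numpy as np
from sklearn.manifold import spectral_embedding
from sklearn.neighbors import kneighbors_graph

rng = np.random.RandomState(0)
X = rng.rand(50, 3)

# k-NN graphs are not symmetric in general; symmetrize before embedding
A = kneighbors_graph(X, n_neighbors=5, include_self=True)
A = 0.5 * (A + A.T)

emb = spectral_embedding(A, n_components=2, random_state=0)
print(emb.shape)  # (50, 2)
```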
sklearn.manifold.trustworthiness
sklearn.manifold.trustworthiness(X, X_embedded, *, n_neighbors=5, metric='euclidean') [source]
Expresses to what extent the local structure is retained. The trustworthiness is within [0, 1]. It is defined as \[T(k) = 1 - \frac{2}{nk (2n - 3k - 1)} \sum^n_{i=1} \sum_{j \in \mathcal{N}_{i}^{k}} \max(0, (r(i, j) - k))\] where for each sample i, \(\mathcal{N}_{i}^{k}\) are its k nearest neighbors in the output space, and every sample j is its \(r(i, j)\)-th nearest neighbor in the input space. In other words, any unexpected nearest neighbors in the output space are penalised in proportion to their rank in the input space. “Neighborhood Preservation in Nonlinear Projection Methods: An Experimental Study” J. Venna, S. Kaski “Learning a Parametric Embedding by Preserving Local Structure” L.J.P. van der Maaten Parameters
Xndarray of shape (n_samples, n_features) or (n_samples, n_samples)
If the metric is ‘precomputed’ X must be a square distance matrix. Otherwise it contains a sample per row.
X_embeddedndarray of shape (n_samples, n_components)
Embedding of the training data in low-dimensional space.
n_neighborsint, default=5
Number of neighbors k that will be considered.
metricstr or callable, default=’euclidean’
Which metric to use for computing pairwise distances between samples from the original input space. If metric is ‘precomputed’, X must be a matrix of pairwise distances or squared distances. Otherwise, see the documentation of argument metric in sklearn.pairwise.pairwise_distances for a list of available metrics. New in version 0.20. Returns
trustworthinessfloat
Trustworthiness of the low-dimensional embedding. | sklearn.modules.generated.sklearn.manifold.trustworthiness |
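A minimal usage sketch of trustworthiness, scoring a PCA projection of synthetic data (PCA is used only as a convenient embedding for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import trustworthiness

rng = np.random.RandomState(0)
X = rng.rand(100, 10)
X_embedded = PCA(n_components=2).fit_transform(X)

# Returns a score in [0, 1]; higher means local structure is better preserved
t = trustworthiness(X, X_embedded, n_neighbors=5)
print(round(t, 2))
```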
sklearn.metrics.accuracy_score
sklearn.metrics.accuracy_score(y_true, y_pred, *, normalize=True, sample_weight=None) [source]
Accuracy classification score. In multilabel classification, this function computes subset accuracy: the set of labels predicted for a sample must exactly match the corresponding set of labels in y_true. Read more in the User Guide. Parameters
y_true1d array-like, or label indicator array / sparse matrix
Ground truth (correct) labels.
y_pred1d array-like, or label indicator array / sparse matrix
Predicted labels, as returned by a classifier.
normalizebool, default=True
If False, return the number of correctly classified samples. Otherwise, return the fraction of correctly classified samples.
sample_weightarray-like of shape (n_samples,), default=None
Sample weights. Returns
scorefloat
If normalize == True, return the fraction of correctly classified samples (float), else returns the number of correctly classified samples (int). The best performance is 1 with normalize == True and the number of samples with normalize == False. See also
jaccard_score, hamming_loss, zero_one_loss
Notes In binary and multiclass classification, this function is equal to the jaccard_score function. Examples >>> from sklearn.metrics import accuracy_score
>>> y_pred = [0, 2, 1, 3]
>>> y_true = [0, 1, 2, 3]
>>> accuracy_score(y_true, y_pred)
0.5
>>> accuracy_score(y_true, y_pred, normalize=False)
2
In the multilabel case with binary label indicators: >>> import numpy as np
>>> accuracy_score(np.array([[0, 1], [1, 1]]), np.ones((2, 2)))
0.5
Examples using sklearn.metrics.accuracy_score
Plot classification probability
Multi-class AdaBoosted Decision Trees
Probabilistic predictions with Gaussian process classification (GPC)
Demonstration of multi-metric evaluation on cross_val_score and GridSearchCV
Importance of Feature Scaling
Effect of varying threshold for self-training
Classification of text documents using sparse features | sklearn.modules.generated.sklearn.metrics.accuracy_score |
sklearn.metrics.adjusted_mutual_info_score
sklearn.metrics.adjusted_mutual_info_score(labels_true, labels_pred, *, average_method='arithmetic') [source]
Adjusted Mutual Information between two clusterings. Adjusted Mutual Information (AMI) is an adjustment of the Mutual Information (MI) score to account for chance. It accounts for the fact that the MI is generally higher for two clusterings with a larger number of clusters, regardless of whether there is actually more information shared. For two clusterings \(U\) and \(V\), the AMI is given as: AMI(U, V) = [MI(U, V) - E(MI(U, V))] / [avg(H(U), H(V)) - E(MI(U, V))]
This metric is independent of the absolute values of the labels: a permutation of the class or cluster label values won’t change the score value in any way. This metric is furthermore symmetric: switching label_true with label_pred will return the same score value. This can be useful to measure the agreement of two independent label assignments strategies on the same dataset when the real ground truth is not known. Be mindful that this function is an order of magnitude slower than other metrics, such as the Adjusted Rand Index. Read more in the User Guide. Parameters
labels_trueint array, shape = [n_samples]
A clustering of the data into disjoint subsets.
labels_predint array-like of shape (n_samples,)
A clustering of the data into disjoint subsets.
average_methodstr, default=’arithmetic’
How to compute the normalizer in the denominator. Possible options are ‘min’, ‘geometric’, ‘arithmetic’, and ‘max’. New in version 0.20. Changed in version 0.22: The default value of average_method changed from ‘max’ to ‘arithmetic’. Returns
ami: float (upper-bounded by 1.0)
The AMI returns a value of 1 when the two partitions are identical (i.e. perfectly matched). Random partitions (independent labellings) have an expected AMI around 0 on average and can therefore be negative. See also
adjusted_rand_score
Adjusted Rand Index.
mutual_info_score
Mutual Information (not adjusted for chance). References
1
Vinh, Epps, and Bailey, (2010). Information Theoretic Measures for Clusterings Comparison: Variants, Properties, Normalization and Correction for Chance, JMLR
2
Wikipedia entry for the Adjusted Mutual Information Examples Perfect labelings are both homogeneous and complete, hence have score 1.0: >>> from sklearn.metrics.cluster import adjusted_mutual_info_score
>>> adjusted_mutual_info_score([0, 0, 1, 1], [0, 0, 1, 1])
...
1.0
>>> adjusted_mutual_info_score([0, 0, 1, 1], [1, 1, 0, 0])
...
1.0
If class members are completely split across different clusters, the assignment is totally incomplete, hence the AMI is null: >>> adjusted_mutual_info_score([0, 0, 0, 0], [0, 1, 2, 3])
...
0.0
Examples using sklearn.metrics.adjusted_mutual_info_score
Demo of affinity propagation clustering algorithm
Demo of DBSCAN clustering algorithm
Adjustment for chance in clustering performance evaluation
A demo of K-Means clustering on the handwritten digits data | sklearn.modules.generated.sklearn.metrics.adjusted_mutual_info_score |
sklearn.metrics.adjusted_rand_score
sklearn.metrics.adjusted_rand_score(labels_true, labels_pred) [source]
Rand index adjusted for chance. The Rand Index computes a similarity measure between two clusterings by considering all pairs of samples and counting pairs that are assigned in the same or different clusters in the predicted and true clusterings. The raw RI score is then “adjusted for chance” into the ARI score using the following scheme: ARI = (RI - Expected_RI) / (max(RI) - Expected_RI)
The adjusted Rand index is thus ensured to have a value close to 0.0 for random labeling independently of the number of clusters and samples and exactly 1.0 when the clusterings are identical (up to a permutation). ARI is a symmetric measure: adjusted_rand_score(a, b) == adjusted_rand_score(b, a)
Read more in the User Guide. Parameters
labels_trueint array, shape = [n_samples]
Ground truth class labels to be used as a reference
labels_predarray-like of shape (n_samples,)
Cluster labels to evaluate Returns
ARIfloat
Similarity score between -1.0 and 1.0. Random labelings have an ARI close to 0.0. 1.0 stands for perfect match. See also
adjusted_mutual_info_score
Adjusted Mutual Information. References
Hubert1985
L. Hubert and P. Arabie, Comparing Partitions, Journal of Classification 1985 https://link.springer.com/article/10.1007%2FBF01908075
Steinley2004
D. Steinley, Properties of the Hubert-Arabie adjusted Rand index, Psychological Methods 2004
wk
https://en.wikipedia.org/wiki/Rand_index#Adjusted_Rand_index Examples Perfectly matching labelings have a score of 1, even with permuted labels: >>> from sklearn.metrics.cluster import adjusted_rand_score
>>> adjusted_rand_score([0, 0, 1, 1], [0, 0, 1, 1])
1.0
>>> adjusted_rand_score([0, 0, 1, 1], [1, 1, 0, 0])
1.0
Labelings that assign all class members to the same clusters are complete but may not always be pure, hence penalized: >>> adjusted_rand_score([0, 0, 1, 2], [0, 0, 1, 1])
0.57...
ARI is symmetric, so labelings that have pure clusters with members coming from the same classes but unnecessary splits are penalized: >>> adjusted_rand_score([0, 0, 1, 1], [0, 0, 1, 2])
0.57...
If class members are completely split across different clusters, the assignment is totally incomplete, hence the ARI is very low: >>> adjusted_rand_score([0, 0, 0, 0], [0, 1, 2, 3])
0.0
Examples using sklearn.metrics.adjusted_rand_score
Demo of affinity propagation clustering algorithm
Demo of DBSCAN clustering algorithm
Adjustment for chance in clustering performance evaluation
A demo of K-Means clustering on the handwritten digits data
Clustering text documents using k-means | sklearn.modules.generated.sklearn.metrics.adjusted_rand_score |
sklearn.metrics.auc
sklearn.metrics.auc(x, y) [source]
Compute Area Under the Curve (AUC) using the trapezoidal rule. This is a general function, given points on a curve. For computing the area under the ROC-curve, see roc_auc_score. For an alternative way to summarize a precision-recall curve, see average_precision_score. Parameters
xndarray of shape (n,)
x coordinates. These must be either monotonic increasing or monotonic decreasing.
yndarray of shape (n,)
y coordinates. Returns
aucfloat
See also
roc_auc_score
Compute the area under the ROC curve.
average_precision_score
Compute average precision from prediction scores.
precision_recall_curve
Compute precision-recall pairs for different probability thresholds. Examples >>> import numpy as np
>>> from sklearn import metrics
>>> y = np.array([1, 1, 2, 2])
>>> pred = np.array([0.1, 0.4, 0.35, 0.8])
>>> fpr, tpr, thresholds = metrics.roc_curve(y, pred, pos_label=2)
>>> metrics.auc(fpr, tpr)
0.75
Examples using sklearn.metrics.auc
Species distribution modeling
Poisson regression and non-normal loss
Tweedie regression on insurance claims
Receiver Operating Characteristic (ROC) with cross validation
Receiver Operating Characteristic (ROC)
Precision-Recall | sklearn.modules.generated.sklearn.metrics.auc |