class sklearn.neighbors.NearestCentroid(metric='euclidean', *, shrink_threshold=None) [source]

Nearest centroid classifier. Each class is represented by its centroid, with test samples classified to the class with the nearest centroid.

Read more in the User Guide.

Parameters
    metric : str or callable, default='euclidean'
        The metric to use when calculating distance between instances in a feature array. If metric is a string or callable, it must be one of the options allowed by metrics.pairwise.pairwise_distances for its metric parameter. The centroid for the samples corresponding to each class is the point from which the sum of the distances (according to the metric) of all samples that belong to that class is minimized. If the "manhattan" metric is provided, this centroid is the median; for all other metrics, the centroid is the mean.
        Changed in version 0.19: metric='precomputed' was deprecated and now raises an error.
    shrink_threshold : float, default=None
        Threshold for shrinking centroids to remove features.

Attributes
    centroids_ : array-like of shape (n_classes, n_features)
        Centroid of each class.
    classes_ : array of shape (n_classes,)
        The unique class labels.

See also
    KNeighborsClassifier : Nearest neighbors classifier.

Notes
    When used for text classification with tf-idf vectors, this classifier is also known as the Rocchio classifier.

References
    Tibshirani, R., Hastie, T., Narasimhan, B., & Chu, G. (2002). Diagnosis of multiple cancer types by shrunken centroids of gene expression. Proceedings of the National Academy of Sciences, 99(10), 6567-6572.
Examples

>>> from sklearn.neighbors import NearestCentroid
>>> import numpy as np
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> y = np.array([1, 1, 1, 2, 2, 2])
>>> clf = NearestCentroid()
>>> clf.fit(X, y)
NearestCentroid()
>>> print(clf.predict([[-0.8, -1]]))
[1]

Methods
    fit(X, y) : Fit the NearestCentroid model according to the given training data.
    get_params([deep]) : Get parameters for this estimator.
    predict(X) : Perform classification on an array of test vectors X.
    score(X, y[, sample_weight]) : Return the mean accuracy on the given test data and labels.
    set_params(**params) : Set the parameters of this estimator.

fit(X, y) [source]
    Fit the NearestCentroid model according to the given training data.
    Parameters
        X : {array-like, sparse matrix} of shape (n_samples, n_features)
            Training vectors, where n_samples is the number of samples and n_features is the number of features. Note that centroid shrinking cannot be used with sparse matrices.
        y : array-like of shape (n_samples,)
            Target values (integers).

get_params(deep=True) [source]
    Get parameters for this estimator.
    Parameters
        deep : bool, default=True
            If True, will return the parameters for this estimator and contained subobjects that are estimators.
    Returns
        params : dict
            Parameter names mapped to their values.

predict(X) [source]
    Perform classification on an array of test vectors X. The predicted class C for each sample in X is returned.
    Parameters
        X : array-like of shape (n_samples, n_features)
    Returns
        C : ndarray of shape (n_samples,)
    Notes
        If the metric constructor parameter is "precomputed", X is assumed to be the distance matrix between the data to be predicted and self.centroids_.

score(X, y, sample_weight=None) [source]
    Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires that each label set be correctly predicted for each sample.
    Parameters
        X : array-like of shape (n_samples, n_features)
            Test samples.
        y : array-like of shape (n_samples,) or (n_samples, n_outputs)
            True labels for X.
        sample_weight : array-like of shape (n_samples,), default=None
            Sample weights.
    Returns
        score : float
            Mean accuracy of self.predict(X) w.r.t. y.

set_params(**params) [source]
    Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
    Parameters
        **params : dict
            Estimator parameters.
    Returns
        self : estimator instance
            Estimator instance.
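The shrink_threshold parameter has no example above. The following sketch (toy data and an arbitrary threshold of 0.5, both chosen here for illustration) shows the intended effect: shrinking pulls each class centroid toward the overall data centroid and zeroes out features whose class-wise deviation falls below the threshold, effectively removing uninformative features from the decision.

```python
import numpy as np
from sklearn.neighbors import NearestCentroid

# Toy data: feature 0 separates the two classes, feature 1 is pure noise.
rng = np.random.RandomState(0)
X = np.vstack([
    np.column_stack([rng.normal(-2, 0.5, 50), rng.normal(0, 1, 50)]),
    np.column_stack([rng.normal(+2, 0.5, 50), rng.normal(0, 1, 50)]),
])
y = np.array([0] * 50 + [1] * 50)

plain = NearestCentroid().fit(X, y)
shrunk = NearestCentroid(shrink_threshold=0.5).fit(X, y)

# With shrinkage, the noisy second feature is pulled toward the
# overall mean, so it contributes little or nothing to classification.
print("plain centroids:\n", plain.centroids_)
print("shrunken centroids:\n", shrunk.centroids_)
```

In the Tibshirani et al. reference above, this is the "nearest shrunken centroids" method used to select a small set of informative genes.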
class sklearn.neighbors.NearestNeighbors(*, n_neighbors=5, radius=1.0, algorithm='auto', leaf_size=30, metric='minkowski', p=2, metric_params=None, n_jobs=None) [source]

Unsupervised learner for implementing neighbor searches.

Read more in the User Guide.

New in version 0.9.

Parameters
    n_neighbors : int, default=5
        Number of neighbors to use by default for kneighbors queries.
    radius : float, default=1.0
        Range of parameter space to use by default for radius_neighbors queries.
    algorithm : {'auto', 'ball_tree', 'kd_tree', 'brute'}, default='auto'
        Algorithm used to compute the nearest neighbors:
        - 'ball_tree' will use BallTree
        - 'kd_tree' will use KDTree
        - 'brute' will use a brute-force search.
        - 'auto' will attempt to decide the most appropriate algorithm based on the values passed to the fit method.
        Note: fitting on sparse input will override the setting of this parameter, using brute force.
    leaf_size : int, default=30
        Leaf size passed to BallTree or KDTree. This can affect the speed of construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem.
    metric : str or callable, default='minkowski'
        The distance metric to use for the tree. The default metric is minkowski, and with p=2 it is equivalent to the standard Euclidean metric. See the documentation of DistanceMetric for a list of available metrics. If metric is "precomputed", X is assumed to be a distance matrix and must be square during fit. X may be a sparse graph, in which case only "nonzero" elements may be considered neighbors.
    p : int, default=2
        Parameter for the Minkowski metric from sklearn.metrics.pairwise.pairwise_distances. When p = 1, this is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used.
    metric_params : dict, default=None
        Additional keyword arguments for the metric function.
    n_jobs : int, default=None
        The number of parallel jobs to run for the neighbors search.
        None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.

Attributes
    effective_metric_ : str
        Metric used to compute distances to neighbors.
    effective_metric_params_ : dict
        Parameters for the metric used to compute distances to neighbors.
    n_samples_fit_ : int
        Number of samples in the fitted data.

See also
    KNeighborsClassifier, RadiusNeighborsClassifier, KNeighborsRegressor, RadiusNeighborsRegressor, BallTree

Notes
    See Nearest Neighbors in the online documentation for a discussion of the choice of algorithm and leaf_size.
    https://en.wikipedia.org/wiki/K-nearest_neighbor_algorithm

Examples

>>> import numpy as np
>>> from sklearn.neighbors import NearestNeighbors
>>> samples = [[0, 0, 2], [1, 0, 0], [0, 0, 1]]
>>> neigh = NearestNeighbors(n_neighbors=2, radius=0.4)
>>> neigh.fit(samples)
NearestNeighbors(...)
>>> neigh.kneighbors([[0, 0, 1.3]], 2, return_distance=False)
array([[2, 0]]...)
>>> nbrs = neigh.radius_neighbors(
...     [[0, 0, 1.3]], 0.4, return_distance=False
... )
>>> np.asarray(nbrs[0][0])
array(2)

Methods
    fit(X[, y]) : Fit the nearest neighbors estimator from the training dataset.
    get_params([deep]) : Get parameters for this estimator.
    kneighbors([X, n_neighbors, return_distance]) : Find the K-neighbors of a point.
    kneighbors_graph([X, n_neighbors, mode]) : Compute the (weighted) graph of k-neighbors for points in X.
    radius_neighbors([X, radius, …]) : Find the neighbors within a given radius of a point or points.
    radius_neighbors_graph([X, radius, mode, …]) : Compute the (weighted) graph of neighbors for points in X.
    set_params(**params) : Set the parameters of this estimator.

fit(X, y=None) [source]
    Fit the nearest neighbors estimator from the training dataset.
    Parameters
        X : {array-like, sparse matrix} of shape (n_samples, n_features) or (n_samples, n_samples) if metric='precomputed'
            Training data.
        y : Ignored
            Not used, present for API consistency by convention.
    Returns
        self : NearestNeighbors
            The fitted nearest neighbors estimator.

get_params(deep=True) [source]
    Get parameters for this estimator.
    Parameters
        deep : bool, default=True
            If True, will return the parameters for this estimator and contained subobjects that are estimators.
    Returns
        params : dict
            Parameter names mapped to their values.

kneighbors(X=None, n_neighbors=None, return_distance=True) [source]
    Find the K-neighbors of a point. Returns indices of and distances to the neighbors of each point.
    Parameters
        X : array-like of shape (n_queries, n_features), or (n_queries, n_indexed) if metric == 'precomputed', default=None
            The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor.
        n_neighbors : int, default=None
            Number of neighbors required for each sample. The default is the value passed to the constructor.
        return_distance : bool, default=True
            Whether or not to return the distances.
    Returns
        neigh_dist : ndarray of shape (n_queries, n_neighbors)
            Array representing the distances to points; only present if return_distance=True.
        neigh_ind : ndarray of shape (n_queries, n_neighbors)
            Indices of the nearest points in the population matrix.
    Examples
        In the following example, we construct a NearestNeighbors class from an array representing our data set and ask which is the closest point to [1, 1, 1]:
        >>> samples = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5]]
        >>> from sklearn.neighbors import NearestNeighbors
        >>> neigh = NearestNeighbors(n_neighbors=1)
        >>> neigh.fit(samples)
        NearestNeighbors(n_neighbors=1)
        >>> print(neigh.kneighbors([[1., 1., 1.]]))
        (array([[0.5]]), array([[2]]))
        As you can see, it returns [[0.5]] and [[2]], which means that the closest element is at distance 0.5 and is the third element of samples (indexes start at 0). You can also query for multiple points:
        >>> X = [[0., 1., 0.], [1., 0., 1.]]
        >>> neigh.kneighbors(X, return_distance=False)
        array([[1], [2]]...)
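The algorithm constructor parameter described above changes how neighbors are found, not which neighbors are found: the three exact backends return identical results, and 'auto' simply picks among them. A quick sanity-check sketch with made-up random data (no ties, so index order is deterministic):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Random dense data in 3 dimensions, where kd_tree and ball_tree both apply.
X = np.random.RandomState(42).rand(100, 3)
query = np.random.RandomState(7).rand(5, 3)

# All exact backends should agree on the neighbor indices.
results = {}
for algo in ("brute", "kd_tree", "ball_tree"):
    nn = NearestNeighbors(n_neighbors=4, algorithm=algo).fit(X)
    results[algo] = nn.kneighbors(query, return_distance=False)

assert np.array_equal(results["brute"], results["kd_tree"])
assert np.array_equal(results["brute"], results["ball_tree"])
```

The practical difference between the backends is speed and memory: brute force scales with the number of samples per query, while the tree structures trade construction time (tuned by leaf_size) for faster queries in low to moderate dimensions.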
kneighbors_graph(X=None, n_neighbors=None, mode='connectivity') [source]
    Compute the (weighted) graph of k-neighbors for points in X.
    Parameters
        X : array-like of shape (n_queries, n_features), or (n_queries, n_indexed) if metric == 'precomputed', default=None
            The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor. For metric='precomputed' the shape should be (n_queries, n_indexed); otherwise the shape should be (n_queries, n_features).
        n_neighbors : int, default=None
            Number of neighbors for each sample. The default is the value passed to the constructor.
        mode : {'connectivity', 'distance'}, default='connectivity'
            Type of returned matrix: 'connectivity' will return the connectivity matrix with ones and zeros; in 'distance' the edges are Euclidean distances between points.
    Returns
        A : sparse matrix of shape (n_queries, n_samples_fit)
            n_samples_fit is the number of samples in the fitted data. A[i, j] is assigned the weight of the edge that connects i to j. The matrix is in CSR format.
    See also
        NearestNeighbors.radius_neighbors_graph
    Examples
        >>> X = [[0], [3], [1]]
        >>> from sklearn.neighbors import NearestNeighbors
        >>> neigh = NearestNeighbors(n_neighbors=2)
        >>> neigh.fit(X)
        NearestNeighbors(n_neighbors=2)
        >>> A = neigh.kneighbors_graph(X)
        >>> A.toarray()
        array([[1., 0., 1.],
               [0., 1., 1.],
               [1., 0., 1.]])

radius_neighbors(X=None, radius=None, return_distance=True, sort_results=False) [source]
    Find the neighbors within a given radius of a point or points. Returns the indices and distances of each point from the dataset lying in a ball of size radius around the points of the query array. Points lying on the boundary are included in the results. The result points are not necessarily sorted by distance to their query point.
    Parameters
        X : array-like of shape (n_samples, n_features), default=None
            The query point or points. If not provided, neighbors of each indexed point are returned.
            In this case, the query point is not considered its own neighbor.
        radius : float, default=None
            Limiting distance of neighbors to return. The default is the value passed to the constructor.
        return_distance : bool, default=True
            Whether or not to return the distances.
        sort_results : bool, default=False
            If True, the distances and indices will be sorted by increasing distances before being returned. If False, the results may not be sorted. If return_distance=False, setting sort_results=True will result in an error.
            New in version 0.22.
    Returns
        neigh_dist : ndarray of shape (n_samples,) of arrays
            Array representing the distances to each point; only present if return_distance=True. The distance values are computed according to the metric constructor parameter.
        neigh_ind : ndarray of shape (n_samples,) of arrays
            An array of arrays of indices of the approximate nearest points from the population matrix that lie within a ball of size radius around the query points.
    Notes
        Because the number of neighbors of each point is not necessarily equal, the results for multiple query points cannot be fit in a standard data array. For efficiency, radius_neighbors returns arrays of objects, where each object is a 1D array of indices or distances.
    Examples
        In the following example, we construct a NearestNeighbors class from an array representing our data set and ask which is the closest point to [1, 1, 1]:
        >>> import numpy as np
        >>> samples = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5]]
        >>> from sklearn.neighbors import NearestNeighbors
        >>> neigh = NearestNeighbors(radius=1.6)
        >>> neigh.fit(samples)
        NearestNeighbors(radius=1.6)
        >>> rng = neigh.radius_neighbors([[1., 1., 1.]])
        >>> print(np.asarray(rng[0][0]))
        [1.5 0.5]
        >>> print(np.asarray(rng[1][0]))
        [1 2]
        The first array returned contains the distances to all points which are closer than 1.6, while the second array contains their indices. In general, multiple points can be queried at the same time.
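The ragged object-array return described in the Notes above is easy to trip over, so here is a small sketch (made-up coordinates) showing it directly with two query points whose neighborhoods have different sizes:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Five points on a line; the first query has 3 neighbors within the
# radius, the second only 2, so the result cannot be a 2-D matrix.
samples = [[0.0, 0.0], [0.2, 0.0], [1.0, 0.0], [3.0, 0.0], [3.1, 0.0]]
neigh = NearestNeighbors(radius=1.2).fit(samples)

dist, ind = neigh.radius_neighbors([[0.5, 0.0], [3.05, 0.0]])

# dist and ind are object arrays: each entry is its own 1-D array.
for d, i in zip(dist, ind):
    print("indices:", i, "distances:", d)
```

Code that consumes these results should iterate per query (as above) rather than assume a rectangular array.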
radius_neighbors_graph(X=None, radius=None, mode='connectivity', sort_results=False) [source]
    Compute the (weighted) graph of neighbors for points in X. Neighborhoods are restricted to points at a distance lower than radius.
    Parameters
        X : array-like of shape (n_samples, n_features), default=None
            The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor.
        radius : float, default=None
            Radius of neighborhoods. The default is the value passed to the constructor.
        mode : {'connectivity', 'distance'}, default='connectivity'
            Type of returned matrix: 'connectivity' will return the connectivity matrix with ones and zeros; in 'distance' the edges are Euclidean distances between points.
        sort_results : bool, default=False
            If True, in each row of the result the non-zero entries will be sorted by increasing distances. If False, the non-zero entries may not be sorted. Only used with mode='distance'.
            New in version 0.22.
    Returns
        A : sparse matrix of shape (n_queries, n_samples_fit)
            n_samples_fit is the number of samples in the fitted data. A[i, j] is assigned the weight of the edge that connects i to j. The matrix is in CSR format.
    See also
        kneighbors_graph
    Examples
        >>> X = [[0], [3], [1]]
        >>> from sklearn.neighbors import NearestNeighbors
        >>> neigh = NearestNeighbors(radius=1.5)
        >>> neigh.fit(X)
        NearestNeighbors(radius=1.5)
        >>> A = neigh.radius_neighbors_graph(X)
        >>> A.toarray()
        array([[1., 0., 1.],
               [0., 1., 0.],
               [1., 0., 1.]])

set_params(**params) [source]
    Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
    Parameters
        **params : dict
            Estimator parameters.
    Returns
        self : estimator instance
            Estimator instance.
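To complement the connectivity example in radius_neighbors_graph above, a short sketch (same toy data) showing mode='distance': edges then carry the actual distances rather than 0/1 flags, which is the form typically fed to graph-based clustering or manifold methods.

```python
from sklearn.neighbors import NearestNeighbors

X = [[0.0], [3.0], [1.0]]
neigh = NearestNeighbors(radius=1.5).fit(X)

# 'distance' mode stores edge lengths; sort_results=True orders each
# row's stored entries by increasing distance within the CSR data.
A = neigh.radius_neighbors_graph(X, mode="distance", sort_results=True)
print(A.toarray())
```

Note that a point's distance to itself is 0, so in 'distance' mode the diagonal is indistinguishable from a missing edge in the dense view, whereas 'connectivity' mode marks it with a 1.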
sklearn.modules.generated.sklearn.neighbors.nearestneighbors#sklearn.neighbors.NearestNeighbors
sklearn.neighbors.NearestNeighbors class sklearn.neighbors.NearestNeighbors(*, n_neighbors=5, radius=1.0, algorithm='auto', leaf_size=30, metric='minkowski', p=2, metric_params=None, n_jobs=None) [source] Unsupervised learner for implementing neighbor searches. Read more in the User Guide. New in version 0.9. Parameters n_neighborsint, default=5 Number of neighbors to use by default for kneighbors queries. radiusfloat, default=1.0 Range of parameter space to use by default for radius_neighbors queries. algorithm{‘auto’, ‘ball_tree’, ‘kd_tree’, ‘brute’}, default=’auto’ Algorithm used to compute the nearest neighbors: ‘ball_tree’ will use BallTree ‘kd_tree’ will use KDTree ‘brute’ will use a brute-force search. ‘auto’ will attempt to decide the most appropriate algorithm based on the values passed to fit method. Note: fitting on sparse input will override the setting of this parameter, using brute force. leaf_sizeint, default=30 Leaf size passed to BallTree or KDTree. This can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem. metricstr or callable, default=’minkowski’ the distance metric to use for the tree. The default metric is minkowski, and with p=2 is equivalent to the standard Euclidean metric. See the documentation of DistanceMetric for a list of available metrics. If metric is “precomputed”, X is assumed to be a distance matrix and must be square during fit. X may be a sparse graph, in which case only “nonzero” elements may be considered neighbors. pint, default=2 Parameter for the Minkowski metric from sklearn.metrics.pairwise.pairwise_distances. When p = 1, this is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used. metric_paramsdict, default=None Additional keyword arguments for the metric function. 
n_jobsint, default=None The number of parallel jobs to run for neighbors search. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. Attributes effective_metric_str Metric used to compute distances to neighbors. effective_metric_params_dict Parameters for the metric used to compute distances to neighbors. n_samples_fit_int Number of samples in the fitted data. See also KNeighborsClassifier RadiusNeighborsClassifier KNeighborsRegressor RadiusNeighborsRegressor BallTree Notes See Nearest Neighbors in the online documentation for a discussion of the choice of algorithm and leaf_size. https://en.wikipedia.org/wiki/K-nearest_neighbor_algorithm Examples >>> import numpy as np >>> from sklearn.neighbors import NearestNeighbors >>> samples = [[0, 0, 2], [1, 0, 0], [0, 0, 1]] >>> neigh = NearestNeighbors(n_neighbors=2, radius=0.4) >>> neigh.fit(samples) NearestNeighbors(...) >>> neigh.kneighbors([[0, 0, 1.3]], 2, return_distance=False) array([[2, 0]]...) >>> nbrs = neigh.radius_neighbors( ... [[0, 0, 1.3]], 0.4, return_distance=False ... ) >>> np.asarray(nbrs[0][0]) array(2) Methods fit(X[, y]) Fit the nearest neighbors estimator from the training dataset. get_params([deep]) Get parameters for this estimator. kneighbors([X, n_neighbors, return_distance]) Finds the K-neighbors of a point. kneighbors_graph([X, n_neighbors, mode]) Computes the (weighted) graph of k-Neighbors for points in X radius_neighbors([X, radius, …]) Finds the neighbors within a given radius of a point or points. radius_neighbors_graph([X, radius, mode, …]) Computes the (weighted) graph of Neighbors for points in X set_params(**params) Set the parameters of this estimator. fit(X, y=None) [source] Fit the nearest neighbors estimator from the training dataset. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) or (n_samples, n_samples) if metric=’precomputed’ Training data. 
yIgnored Not used, present for API consistency by convention. Returns selfNearestNeighbors The fitted nearest neighbors estimator. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. kneighbors(X=None, n_neighbors=None, return_distance=True) [source] Finds the K-neighbors of a point. Returns indices of and distances to the neighbors of each point. Parameters Xarray-like, shape (n_queries, n_features), or (n_queries, n_indexed) if metric == ‘precomputed’, default=None The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor. n_neighborsint, default=None Number of neighbors required for each sample. The default is the value passed to the constructor. return_distancebool, default=True Whether or not to return the distances. Returns neigh_distndarray of shape (n_queries, n_neighbors) Array representing the lengths to points, only present if return_distance=True neigh_indndarray of shape (n_queries, n_neighbors) Indices of the nearest points in the population matrix. Examples In the following example, we construct a NearestNeighbors class from an array representing our data set and ask who’s the closest point to [1,1,1] >>> samples = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5]] >>> from sklearn.neighbors import NearestNeighbors >>> neigh = NearestNeighbors(n_neighbors=1) >>> neigh.fit(samples) NearestNeighbors(n_neighbors=1) >>> print(neigh.kneighbors([[1., 1., 1.]])) (array([[0.5]]), array([[2]])) As you can see, it returns [[0.5]], and [[2]], which means that the element is at distance 0.5 and is the third element of samples (indexes start at 0). 
You can also query for multiple points: >>> X = [[0., 1., 0.], [1., 0., 1.]] >>> neigh.kneighbors(X, return_distance=False) array([[1], [2]]...) kneighbors_graph(X=None, n_neighbors=None, mode='connectivity') [source] Computes the (weighted) graph of k-Neighbors for points in X Parameters Xarray-like of shape (n_queries, n_features), or (n_queries, n_indexed) if metric == ‘precomputed’, default=None The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor. For metric='precomputed' the shape should be (n_queries, n_indexed). Otherwise the shape should be (n_queries, n_features). n_neighborsint, default=None Number of neighbors for each sample. The default is the value passed to the constructor. mode{‘connectivity’, ‘distance’}, default=’connectivity’ Type of returned matrix: ‘connectivity’ will return the connectivity matrix with ones and zeros, in ‘distance’ the edges are Euclidean distance between points. Returns Asparse-matrix of shape (n_queries, n_samples_fit) n_samples_fit is the number of samples in the fitted data A[i, j] is assigned the weight of edge that connects i to j. The matrix is of CSR format. See also NearestNeighbors.radius_neighbors_graph Examples >>> X = [[0], [3], [1]] >>> from sklearn.neighbors import NearestNeighbors >>> neigh = NearestNeighbors(n_neighbors=2) >>> neigh.fit(X) NearestNeighbors(n_neighbors=2) >>> A = neigh.kneighbors_graph(X) >>> A.toarray() array([[1., 0., 1.], [0., 1., 1.], [1., 0., 1.]]) radius_neighbors(X=None, radius=None, return_distance=True, sort_results=False) [source] Finds the neighbors within a given radius of a point or points. Return the indices and distances of each point from the dataset lying in a ball with size radius around the points of the query array. Points lying on the boundary are included in the results. The result points are not necessarily sorted by distance to their query point. 
Parameters Xarray-like of (n_samples, n_features), default=None The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor. radiusfloat, default=None Limiting distance of neighbors to return. The default is the value passed to the constructor. return_distancebool, default=True Whether or not to return the distances. sort_resultsbool, default=False If True, the distances and indices will be sorted by increasing distances before being returned. If False, the results may not be sorted. If return_distance=False, setting sort_results=True will result in an error. New in version 0.22. Returns neigh_distndarray of shape (n_samples,) of arrays Array representing the distances to each point, only present if return_distance=True. The distance values are computed according to the metric constructor parameter. neigh_indndarray of shape (n_samples,) of arrays An array of arrays of indices of the approximate nearest points from the population matrix that lie within a ball of size radius around the query points. Notes Because the number of neighbors of each point is not necessarily equal, the results for multiple query points cannot be fit in a standard data array. For efficiency, radius_neighbors returns arrays of objects, where each object is a 1D array of indices or distances. 
Examples In the following example, we construct a NearestNeighbors class from an array representing our data set and ask who’s the closest point to [1, 1, 1]: >>> import numpy as np >>> samples = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5]] >>> from sklearn.neighbors import NearestNeighbors >>> neigh = NearestNeighbors(radius=1.6) >>> neigh.fit(samples) NearestNeighbors(radius=1.6) >>> rng = neigh.radius_neighbors([[1., 1., 1.]]) >>> print(np.asarray(rng[0][0])) [1.5 0.5] >>> print(np.asarray(rng[1][0])) [1 2] The first array returned contains the distances to all points which are closer than 1.6, while the second array returned contains their indices. In general, multiple points can be queried at the same time. radius_neighbors_graph(X=None, radius=None, mode='connectivity', sort_results=False) [source] Computes the (weighted) graph of Neighbors for points in X Neighborhoods are restricted to points at a distance lower than radius. Parameters Xarray-like of shape (n_samples, n_features), default=None The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor. radiusfloat, default=None Radius of neighborhoods. The default is the value passed to the constructor. mode{‘connectivity’, ‘distance’}, default=’connectivity’ Type of returned matrix: ‘connectivity’ will return the connectivity matrix with ones and zeros, in ‘distance’ the edges are Euclidean distance between points. sort_resultsbool, default=False If True, in each row of the result, the non-zero entries will be sorted by increasing distances. If False, the non-zero entries may not be sorted. Only used with mode=’distance’. New in version 0.22. Returns Asparse-matrix of shape (n_queries, n_samples_fit) n_samples_fit is the number of samples in the fitted data A[i, j] is assigned the weight of edge that connects i to j. The matrix is of CSR format.
See also kneighbors_graph Examples >>> X = [[0], [3], [1]] >>> from sklearn.neighbors import NearestNeighbors >>> neigh = NearestNeighbors(radius=1.5) >>> neigh.fit(X) NearestNeighbors(radius=1.5) >>> A = neigh.radius_neighbors_graph(X) >>> A.toarray() array([[1., 0., 1.], [0., 1., 0.], [1., 0., 1.]]) set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
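The mode parameter of kneighbors_graph and radius_neighbors_graph controls the edge weights of the returned sparse matrix. A minimal sketch (assuming scikit-learn is installed) contrasting the two modes on the same toy data as the examples above:

```python
# Minimal sketch contrasting mode='connectivity' and mode='distance'
# for kneighbors_graph, on the same toy data as the examples above.
from sklearn.neighbors import NearestNeighbors

X = [[0.0], [3.0], [1.0]]
neigh = NearestNeighbors(n_neighbors=2).fit(X)

# 'connectivity' (the default): edges are stored as 1, non-edges as 0
C = neigh.kneighbors_graph(X, mode="connectivity")

# 'distance': edges carry the Euclidean distance to each neighbor
# (each point's own entry is an explicit zero, since it is its nearest neighbor)
D = neigh.kneighbors_graph(X, mode="distance")

print(C.toarray())
print(D.toarray())
```

Both calls return a CSR matrix of shape (n_queries, n_samples_fit), so downstream code can rely on the same sparsity pattern in either mode.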
fit(X, y=None) [source] Fit the nearest neighbors estimator from the training dataset. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) or (n_samples, n_samples) if metric=’precomputed’ Training data. yIgnored Not used, present for API consistency by convention. Returns selfNearestNeighbors The fitted nearest neighbors estimator.
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
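As a brief illustration of how get_params pairs with set_params (a sketch using only the documented API; the particular parameter values are arbitrary), the two methods round-trip the constructor arguments:

```python
# Sketch: round-tripping constructor parameters with get_params/set_params.
from sklearn.neighbors import NearestNeighbors

nn = NearestNeighbors(n_neighbors=3, radius=0.5)
print(nn.get_params()["n_neighbors"])  # 3

# set_params returns the estimator itself, so calls can be chained
nn.set_params(n_neighbors=5, radius=1.0)
print(nn.get_params()["n_neighbors"])  # 5
```

This is the mechanism grid-search tools use to clone and reconfigure estimators without knowing their constructor signatures.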
kneighbors(X=None, n_neighbors=None, return_distance=True) [source] Finds the K-neighbors of a point. Returns indices of and distances to the neighbors of each point. Parameters Xarray-like, shape (n_queries, n_features), or (n_queries, n_indexed) if metric == ‘precomputed’, default=None The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor. n_neighborsint, default=None Number of neighbors required for each sample. The default is the value passed to the constructor. return_distancebool, default=True Whether or not to return the distances. Returns neigh_distndarray of shape (n_queries, n_neighbors) Array representing the lengths to points, only present if return_distance=True neigh_indndarray of shape (n_queries, n_neighbors) Indices of the nearest points in the population matrix. Examples In the following example, we construct a NearestNeighbors class from an array representing our data set and ask who’s the closest point to [1,1,1] >>> samples = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5]] >>> from sklearn.neighbors import NearestNeighbors >>> neigh = NearestNeighbors(n_neighbors=1) >>> neigh.fit(samples) NearestNeighbors(n_neighbors=1) >>> print(neigh.kneighbors([[1., 1., 1.]])) (array([[0.5]]), array([[2]])) As you can see, it returns [[0.5]], and [[2]], which means that the element is at distance 0.5 and is the third element of samples (indexes start at 0). You can also query for multiple points: >>> X = [[0., 1., 0.], [1., 0., 1.]] >>> neigh.kneighbors(X, return_distance=False) array([[1], [2]]...)
class sklearn.neighbors.NeighborhoodComponentsAnalysis(n_components=None, *, init='auto', warm_start=False, max_iter=50, tol=1e-05, callback=None, verbose=0, random_state=None) [source] Neighborhood Components Analysis Neighborhood Component Analysis (NCA) is a machine learning algorithm for metric learning. It learns a linear transformation in a supervised fashion to improve the classification accuracy of a stochastic nearest neighbors rule in the transformed space. Read more in the User Guide. Parameters n_componentsint, default=None Preferred dimensionality of the projected space. If None it will be set to n_features. init{‘auto’, ‘pca’, ‘lda’, ‘identity’, ‘random’} or ndarray of shape (n_features_a, n_features_b), default=’auto’ Initialization of the linear transformation. Possible options are ‘auto’, ‘pca’, ‘lda’, ‘identity’, ‘random’, and a numpy array of shape (n_features_a, n_features_b). ‘auto’ Depending on n_components, the most reasonable initialization will be chosen. If n_components <= n_classes we use ‘lda’, as it uses labels information. If not, but n_components < min(n_features, n_samples), we use ‘pca’, as it projects data in meaningful directions (those of higher variance). Otherwise, we just use ‘identity’. ‘pca’ n_components principal components of the inputs passed to fit will be used to initialize the transformation. (See PCA) ‘lda’ min(n_components, n_classes) most discriminative components of the inputs passed to fit will be used to initialize the transformation. (If n_components > n_classes, the rest of the components will be zero.) (See LinearDiscriminantAnalysis) ‘identity’ If n_components is strictly smaller than the dimensionality of the inputs passed to fit, the identity matrix will be truncated to the first n_components rows. ‘random’ The initial transformation will be a random array of shape (n_components, n_features). Each value is sampled from the standard normal distribution. 
numpy array n_features_b must match the dimensionality of the inputs passed to fit and n_features_a must be less than or equal to that. If n_components is not None, n_features_a must match it. warm_startbool, default=False If True and fit has been called before, the solution of the previous call to fit is used as the initial linear transformation (n_components and init will be ignored). max_iterint, default=50 Maximum number of iterations in the optimization. tolfloat, default=1e-5 Convergence tolerance for the optimization. callbackcallable, default=None If not None, this function is called after every iteration of the optimizer, taking as arguments the current solution (flattened transformation matrix) and the number of iterations. This might be useful in case one wants to examine or store the transformation found after each iteration. verboseint, default=0 If 0, no progress messages will be printed. If 1, progress messages will be printed to stdout. If > 1, progress messages will be printed and the disp parameter of scipy.optimize.minimize will be set to verbose - 2. random_stateint or numpy.RandomState, default=None A pseudo random number generator object or a seed for it if int. If init='random', random_state is used to initialize the random transformation. If init='pca', random_state is passed as an argument to PCA when initializing the transformation. Pass an int for reproducible results across multiple function calls. See the Glossary. Attributes components_ndarray of shape (n_components, n_features) The linear transformation learned during fitting. n_iter_int Counts the number of iterations performed by the optimizer. random_state_numpy.RandomState Pseudo random number generator object used during initialization. References 1 J. Goldberger, G. Hinton, S. Roweis, R. Salakhutdinov. “Neighbourhood Components Analysis”. Advances in Neural Information Processing Systems. 17, 513-520, 2005. 
http://www.cs.nyu.edu/~roweis/papers/ncanips.pdf 2 Wikipedia entry on Neighborhood Components Analysis https://en.wikipedia.org/wiki/Neighbourhood_components_analysis Examples >>> from sklearn.neighbors import NeighborhoodComponentsAnalysis >>> from sklearn.neighbors import KNeighborsClassifier >>> from sklearn.datasets import load_iris >>> from sklearn.model_selection import train_test_split >>> X, y = load_iris(return_X_y=True) >>> X_train, X_test, y_train, y_test = train_test_split(X, y, ... stratify=y, test_size=0.7, random_state=42) >>> nca = NeighborhoodComponentsAnalysis(random_state=42) >>> nca.fit(X_train, y_train) NeighborhoodComponentsAnalysis(...) >>> knn = KNeighborsClassifier(n_neighbors=3) >>> knn.fit(X_train, y_train) KNeighborsClassifier(...) >>> print(knn.score(X_test, y_test)) 0.933333... >>> knn.fit(nca.transform(X_train), y_train) KNeighborsClassifier(...) >>> print(knn.score(nca.transform(X_test), y_test)) 0.961904... Methods fit(X, y) Fit the model according to the given training data. fit_transform(X[, y]) Fit to data, then transform it. get_params([deep]) Get parameters for this estimator. set_params(**params) Set the parameters of this estimator. transform(X) Applies the learned transformation to the given data. fit(X, y) [source] Fit the model according to the given training data. Parameters Xarray-like of shape (n_samples, n_features) The training samples. yarray-like of shape (n_samples,) The corresponding training labels. Returns selfobject returns a trained NeighborhoodComponentsAnalysis model. fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. 
Returns X_newndarray array of shape (n_samples, n_features_new) Transformed array. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. transform(X) [source] Applies the learned transformation to the given data. Parameters Xarray-like of shape (n_samples, n_features) Data samples. Returns X_embedded: ndarray of shape (n_samples, n_components) The data samples transformed. Raises NotFittedError If fit has not been called before.
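Because NeighborhoodComponentsAnalysis implements fit and transform, it can also be chained with a classifier in a Pipeline. A hedged sketch (the n_components=2 choice and the Pipeline composition are illustrative, not part of this class's documentation):

```python
# Sketch: NCA as a supervised dimensionality-reduction step before KNN.
# The Pipeline composition and n_components=2 are illustrative choices.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import (KNeighborsClassifier,
                               NeighborhoodComponentsAnalysis)
from sklearn.pipeline import Pipeline

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=42)

pipe = Pipeline([
    ("nca", NeighborhoodComponentsAnalysis(n_components=2, random_state=42)),
    ("knn", KNeighborsClassifier(n_neighbors=3)),
])
pipe.fit(X_train, y_train)  # learns the NCA transform, then fits KNN on it
print(pipe.score(X_test, y_test))
```

The pipeline applies the learned transformation automatically at predict time, so there is no need to call nca.transform on the test data by hand as in the example above.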
Examples using sklearn.neighbors.NeighborhoodComponentsAnalysis: Manifold learning on handwritten digits: Locally Linear Embedding, Isomap…; Neighborhood Components Analysis Illustration; Comparing Nearest Neighbors with and without Neighborhood Components Analysis; Dimensionality Reduction with Neighborhood Components Analysis
class sklearn.neighbors.RadiusNeighborsClassifier(radius=1.0, *, weights='uniform', algorithm='auto', leaf_size=30, p=2, metric='minkowski', outlier_label=None, metric_params=None, n_jobs=None, **kwargs) [source] Classifier implementing a vote among neighbors within a given radius. Read more in the User Guide. Parameters radiusfloat, default=1.0 Range of parameter space to use by default for radius_neighbors queries. weights{‘uniform’, ‘distance’} or callable, default=’uniform’ Weight function used in prediction. Possible values: ‘uniform’ : uniform weights. All points in each neighborhood are weighted equally. ‘distance’ : weight points by the inverse of their distance. In this case, closer neighbors of a query point will have a greater influence than neighbors which are further away. [callable] : a user-defined function which accepts an array of distances, and returns an array of the same shape containing the weights. Uniform weights are used by default. algorithm{‘auto’, ‘ball_tree’, ‘kd_tree’, ‘brute’}, default=’auto’ Algorithm used to compute the nearest neighbors: ‘ball_tree’ will use BallTree ‘kd_tree’ will use KDTree ‘brute’ will use a brute-force search. ‘auto’ will attempt to decide the most appropriate algorithm based on the values passed to fit method. Note: fitting on sparse input will override the setting of this parameter, using brute force. leaf_sizeint, default=30 Leaf size passed to BallTree or KDTree. This can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem. pint, default=2 Power parameter for the Minkowski metric. When p = 1, this is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used. metricstr or callable, default=’minkowski’ The distance metric to use for the tree. The default metric is minkowski, and with p=2 is equivalent to the standard Euclidean metric. 
See the documentation of DistanceMetric for a list of available metrics. If metric is “precomputed”, X is assumed to be a distance matrix and must be square during fit. X may be a sparse graph, in which case only “nonzero” elements may be considered neighbors. outlier_label{manual label, ‘most_frequent’}, default=None Label for outlier samples (samples with no neighbors in given radius). manual label: str or int label (should be the same type as y) or list of manual labels if multi-output is used. ‘most_frequent’ : assign the most frequent label of y to outliers. None : when any outlier is detected, ValueError will be raised. metric_paramsdict, default=None Additional keyword arguments for the metric function. n_jobsint, default=None The number of parallel jobs to run for neighbors search. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. Attributes classes_ndarray of shape (n_classes,) Class labels known to the classifier. effective_metric_str or callable The distance metric used. It will be the same as the metric parameter or a synonym of it, e.g. ‘euclidean’ if the metric parameter is set to ‘minkowski’ and the p parameter set to 2. effective_metric_params_dict Additional keyword arguments for the metric function. For most metrics it will be the same as the metric_params parameter, but may also contain the p parameter value if the effective_metric_ attribute is set to ‘minkowski’. n_samples_fit_int Number of samples in the fitted data. outlier_label_int or array-like of shape (n_class,) Label which is given for outlier samples (samples with no neighbors within the given radius). outputs_2d_bool False when y’s shape is (n_samples, ) or (n_samples, 1) during fit otherwise True. See also KNeighborsClassifier RadiusNeighborsRegressor KNeighborsRegressor NearestNeighbors Notes See Nearest Neighbors in the online documentation for a discussion of the choice of algorithm and leaf_size. 
https://en.wikipedia.org/wiki/K-nearest_neighbor_algorithm Examples >>> X = [[0], [1], [2], [3]] >>> y = [0, 0, 1, 1] >>> from sklearn.neighbors import RadiusNeighborsClassifier >>> neigh = RadiusNeighborsClassifier(radius=1.0) >>> neigh.fit(X, y) RadiusNeighborsClassifier(...) >>> print(neigh.predict([[1.5]])) [0] >>> print(neigh.predict_proba([[1.0]])) [[0.66666667 0.33333333]] Methods fit(X, y) Fit the radius neighbors classifier from the training dataset. get_params([deep]) Get parameters for this estimator. predict(X) Predict the class labels for the provided data. predict_proba(X) Return probability estimates for the test data X. radius_neighbors([X, radius, …]) Finds the neighbors within a given radius of a point or points. radius_neighbors_graph([X, radius, mode, …]) Computes the (weighted) graph of Neighbors for points in X score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels. set_params(**params) Set the parameters of this estimator. fit(X, y) [source] Fit the radius neighbors classifier from the training dataset. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) or (n_samples, n_samples) if metric=’precomputed’ Training data. y{array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_outputs) Target values. Returns selfRadiusNeighborsClassifier The fitted radius neighbors classifier. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. predict(X) [source] Predict the class labels for the provided data. Parameters Xarray-like of shape (n_queries, n_features), or (n_queries, n_indexed) if metric == ‘precomputed’ Test samples. Returns yndarray of shape (n_queries,) or (n_queries, n_outputs) Class labels for each data sample. 
predict_proba(X) [source] Return probability estimates for the test data X. Parameters Xarray-like of shape (n_queries, n_features), or (n_queries, n_indexed) if metric == ‘precomputed’ Test samples. Returns pndarray of shape (n_queries, n_classes), or a list of n_outputs of such arrays if n_outputs > 1. The class probabilities of the input samples. Classes are ordered by lexicographic order. radius_neighbors(X=None, radius=None, return_distance=True, sort_results=False) [source] Finds the neighbors within a given radius of a point or points. Return the indices and distances of each point from the dataset lying in a ball with size radius around the points of the query array. Points lying on the boundary are included in the results. The result points are not necessarily sorted by distance to their query point. Parameters Xarray-like of (n_samples, n_features), default=None The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor. radiusfloat, default=None Limiting distance of neighbors to return. The default is the value passed to the constructor. return_distancebool, default=True Whether or not to return the distances. sort_resultsbool, default=False If True, the distances and indices will be sorted by increasing distances before being returned. If False, the results may not be sorted. If return_distance=False, setting sort_results=True will result in an error. New in version 0.22. Returns neigh_distndarray of shape (n_samples,) of arrays Array representing the distances to each point, only present if return_distance=True. The distance values are computed according to the metric constructor parameter. neigh_indndarray of shape (n_samples,) of arrays An array of arrays of indices of the approximate nearest points from the population matrix that lie within a ball of size radius around the query points. 
Notes Because the number of neighbors of each point is not necessarily equal, the results for multiple query points cannot be fit in a standard data array. For efficiency, radius_neighbors returns arrays of objects, where each object is a 1D array of indices or distances. Examples In the following example, we construct a NearestNeighbors instance from an array representing our data set and ask which is the closest point to [1, 1, 1]: >>> import numpy as np >>> samples = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5]] >>> from sklearn.neighbors import NearestNeighbors >>> neigh = NearestNeighbors(radius=1.6) >>> neigh.fit(samples) NearestNeighbors(radius=1.6) >>> rng = neigh.radius_neighbors([[1., 1., 1.]]) >>> print(np.asarray(rng[0][0])) [1.5 0.5] >>> print(np.asarray(rng[1][0])) [1 2] The first array returned contains the distances to all points which are closer than 1.6, while the second array returned contains their indices. In general, multiple points can be queried at the same time. radius_neighbors_graph(X=None, radius=None, mode='connectivity', sort_results=False) [source] Computes the (weighted) graph of Neighbors for points in X Neighborhoods are restricted to the points at a distance lower than radius. Parameters Xarray-like of shape (n_samples, n_features), default=None The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor. radiusfloat, default=None Radius of neighborhoods. The default is the value passed to the constructor. mode{‘connectivity’, ‘distance’}, default=’connectivity’ Type of returned matrix: ‘connectivity’ will return the connectivity matrix with ones and zeros, in ‘distance’ the edges are Euclidean distance between points. sort_resultsbool, default=False If True, in each row of the result, the non-zero entries will be sorted by increasing distances. If False, the non-zero entries may not be sorted. Only used with mode=’distance’. New in version 0.22. 
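As noted, multiple points can be queried at the same time, and each query may return a different number of neighbors. A minimal sketch (using the same toy data as the example above) of a two-point query and the resulting per-query arrays:

```python
# Sketch (toy data assumed): querying two points at once with
# radius_neighbors. Because each query point can have a different number
# of in-radius neighbors, the results are arrays of per-query 1D arrays.
import numpy as np
from sklearn.neighbors import NearestNeighbors

samples = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5]]
neigh = NearestNeighbors(radius=1.6).fit(samples)
dist, ind = neigh.radius_neighbors([[1., 1., 1.], [0., 0., 0.]])
# One distance array and one index array per query point.
print(len(ind))        # one entry per query point
print(sorted(ind[1]))  # indices of the neighbors of [0., 0., 0.]
```

Here `ind[0]` holds the neighbors of the first query point and `ind[1]` those of the second; the per-query arrays generally differ in length, which is why plain 2D arrays cannot be used.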
Returns Asparse-matrix of shape (n_queries, n_samples_fit) n_samples_fit is the number of samples in the fitted data. A[i, j] is assigned the weight of the edge that connects i to j. The matrix is in CSR format. See also kneighbors_graph Examples >>> X = [[0], [3], [1]] >>> from sklearn.neighbors import NearestNeighbors >>> neigh = NearestNeighbors(radius=1.5) >>> neigh.fit(X) NearestNeighbors(radius=1.5) >>> A = neigh.radius_neighbors_graph(X) >>> A.toarray() array([[1., 0., 1.], [0., 1., 0.], [1., 0., 1.]]) score(X, y, sample_weight=None) [source] Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy, which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters Xarray-like of shape (n_samples, n_features) Test samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True labels for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat Mean accuracy of self.predict(X) wrt. y. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
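The outlier_label constructor parameter determines what happens when a query point has no training neighbor inside the radius. A minimal sketch, assuming the same toy data as the Examples section, of assigning a manual outlier label instead of letting a ValueError be raised:

```python
# Sketch (toy data assumed): with a manual outlier_label, query points
# that have no training neighbors within the radius receive that label
# instead of triggering a ValueError.
from sklearn.neighbors import RadiusNeighborsClassifier

X = [[0], [1], [2], [3]]
y = [0, 0, 1, 1]
clf = RadiusNeighborsClassifier(radius=1.0, outlier_label=-1).fit(X, y)
# [10] is farther than 1.0 from every training point, so it is an outlier.
print(clf.predict([[10]]))
# An in-radius query behaves normally, as in the Examples section.
print(clf.predict([[1.5]]))
```

The manual label should have the same type as y (here an int); outlier_label='most_frequent' would instead assign the majority label of y to such queries.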
class sklearn.neighbors.RadiusNeighborsRegressor(radius=1.0, *, weights='uniform', algorithm='auto', leaf_size=30, p=2, metric='minkowski', metric_params=None, n_jobs=None, **kwargs) [source] Regression based on neighbors within a fixed radius. The target is predicted by local interpolation of the targets associated with the nearest neighbors in the training set. Read more in the User Guide. New in version 0.9. Parameters radiusfloat, default=1.0 Range of parameter space to use by default for radius_neighbors queries. weights{‘uniform’, ‘distance’} or callable, default=’uniform’ Weight function used in prediction. Possible values: ‘uniform’ : uniform weights. All points in each neighborhood are weighted equally. ‘distance’ : weight points by the inverse of their distance. In this case, closer neighbors of a query point will have a greater influence than neighbors which are further away. [callable] : a user-defined function which accepts an array of distances, and returns an array of the same shape containing the weights. Uniform weights are used by default. algorithm{‘auto’, ‘ball_tree’, ‘kd_tree’, ‘brute’}, default=’auto’ Algorithm used to compute the nearest neighbors: ‘ball_tree’ will use BallTree ‘kd_tree’ will use KDTree ‘brute’ will use a brute-force search. ‘auto’ will attempt to decide the most appropriate algorithm based on the values passed to the fit method. Note: fitting on sparse input will override the setting of this parameter, using brute force. leaf_sizeint, default=30 Leaf size passed to BallTree or KDTree. This can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem. pint, default=2 Power parameter for the Minkowski metric. When p = 1, this is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used. 
metricstr or callable, default=’minkowski’ The distance metric to use for the tree. The default metric is minkowski, and with p=2 is equivalent to the standard Euclidean metric. See the documentation of DistanceMetric for a list of available metrics. If metric is “precomputed”, X is assumed to be a distance matrix and must be square during fit. X may be a sparse graph, in which case only “nonzero” elements may be considered neighbors. metric_paramsdict, default=None Additional keyword arguments for the metric function. n_jobsint, default=None The number of parallel jobs to run for neighbors search. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. Attributes effective_metric_str or callable The distance metric to use. It will be the same as the metric parameter or a synonym of it, e.g. ‘euclidean’ if the metric parameter is set to ‘minkowski’ and the p parameter is set to 2. effective_metric_params_dict Additional keyword arguments for the metric function. For most metrics it will be the same as the metric_params parameter, but it may also contain the p parameter value if the effective_metric_ attribute is set to ‘minkowski’. n_samples_fit_int Number of samples in the fitted data. See also NearestNeighbors KNeighborsRegressor KNeighborsClassifier RadiusNeighborsClassifier Notes See Nearest Neighbors in the online documentation for a discussion of the choice of algorithm and leaf_size. https://en.wikipedia.org/wiki/K-nearest_neighbor_algorithm Examples >>> X = [[0], [1], [2], [3]] >>> y = [0, 0, 1, 1] >>> from sklearn.neighbors import RadiusNeighborsRegressor >>> neigh = RadiusNeighborsRegressor(radius=1.0) >>> neigh.fit(X, y) RadiusNeighborsRegressor(...) >>> print(neigh.predict([[1.5]])) [0.5] Methods fit(X, y) Fit the radius neighbors regressor from the training dataset. get_params([deep]) Get parameters for this estimator. 
predict(X) Predict the target for the provided data radius_neighbors([X, radius, …]) Finds the neighbors within a given radius of a point or points. radius_neighbors_graph([X, radius, mode, …]) Computes the (weighted) graph of Neighbors for points in X score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction. set_params(**params) Set the parameters of this estimator. fit(X, y) [source] Fit the radius neighbors regressor from the training dataset. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) or (n_samples, n_samples) if metric=’precomputed’ Training data. y{array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_outputs) Target values. Returns selfRadiusNeighborsRegressor The fitted radius neighbors regressor. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. predict(X) [source] Predict the target for the provided data Parameters Xarray-like of shape (n_queries, n_features), or (n_queries, n_indexed) if metric == ‘precomputed’ Test samples. Returns yndarray of shape (n_queries,) or (n_queries, n_outputs), dtype=double Target values. radius_neighbors(X=None, radius=None, return_distance=True, sort_results=False) [source] Finds the neighbors within a given radius of a point or points. Return the indices and distances of each point from the dataset lying in a ball with size radius around the points of the query array. Points lying on the boundary are included in the results. The result points are not necessarily sorted by distance to their query point. Parameters Xarray-like of (n_samples, n_features), default=None The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor. 
radiusfloat, default=None Limiting distance of neighbors to return. The default is the value passed to the constructor. return_distancebool, default=True Whether or not to return the distances. sort_resultsbool, default=False If True, the distances and indices will be sorted by increasing distances before being returned. If False, the results may not be sorted. If return_distance=False, setting sort_results=True will result in an error. New in version 0.22. Returns neigh_distndarray of shape (n_samples,) of arrays Array representing the distances to each point, only present if return_distance=True. The distance values are computed according to the metric constructor parameter. neigh_indndarray of shape (n_samples,) of arrays An array of arrays of indices of the approximate nearest points from the population matrix that lie within a ball of size radius around the query points. Notes Because the number of neighbors of each point is not necessarily equal, the results for multiple query points cannot be fit in a standard data array. For efficiency, radius_neighbors returns arrays of objects, where each object is a 1D array of indices or distances. Examples In the following example, we construct a NearestNeighbors instance from an array representing our data set and ask which is the closest point to [1, 1, 1]: >>> import numpy as np >>> samples = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5]] >>> from sklearn.neighbors import NearestNeighbors >>> neigh = NearestNeighbors(radius=1.6) >>> neigh.fit(samples) NearestNeighbors(radius=1.6) >>> rng = neigh.radius_neighbors([[1., 1., 1.]]) >>> print(np.asarray(rng[0][0])) [1.5 0.5] >>> print(np.asarray(rng[1][0])) [1 2] The first array returned contains the distances to all points which are closer than 1.6, while the second array returned contains their indices. In general, multiple points can be queried at the same time. 
radius_neighbors_graph(X=None, radius=None, mode='connectivity', sort_results=False) [source] Compute the (weighted) graph of neighbors for points in X. Neighborhoods are restricted to points at a distance lower than radius. Parameters Xarray-like of shape (n_samples, n_features), default=None The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor. radiusfloat, default=None Radius of neighborhoods. The default is the value passed to the constructor. mode{‘connectivity’, ‘distance’}, default=’connectivity’ Type of returned matrix: ‘connectivity’ will return the connectivity matrix with ones and zeros; in ‘distance’ the edges are distances between points according to the given metric. sort_resultsbool, default=False If True, in each row of the result, the non-zero entries will be sorted by increasing distances. If False, the non-zero entries may not be sorted. Only used with mode=’distance’. New in version 0.22. Returns Asparse-matrix of shape (n_queries, n_samples_fit) n_samples_fit is the number of samples in the fitted data. A[i, j] is assigned the weight of the edge that connects i to j. The matrix is in CSR format. See also kneighbors_graph Examples >>> X = [[0], [3], [1]] >>> from sklearn.neighbors import NearestNeighbors >>> neigh = NearestNeighbors(radius=1.5) >>> neigh.fit(X) NearestNeighbors(radius=1.5) >>> A = neigh.radius_neighbors_graph(X) >>> A.toarray() array([[1., 0., 1.], [0., 1., 0.], [1., 0., 1.]]) score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). 
A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead, with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) w.r.t. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with the default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
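As a hedged sketch of the relationship described above (toy data chosen for illustration, not from the original page): score() returns the same coefficient of determination that sklearn.metrics.r2_score computes from the regressor's own predictions:

```python
from sklearn.metrics import r2_score
from sklearn.neighbors import RadiusNeighborsRegressor

X = [[0], [1], [2], [3]]
y = [0.0, 0.0, 1.0, 1.0]
reg = RadiusNeighborsRegressor(radius=1.0).fit(X, y)

# score() is the R^2 of self.predict(X) w.r.t. y, i.e. the same
# value r2_score reports on the predictions.
print(reg.score(X, y))
print(r2_score(y, reg.predict(X)))
```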
sklearn.neighbors.RadiusNeighborsRegressor class sklearn.neighbors.RadiusNeighborsRegressor(radius=1.0, *, weights='uniform', algorithm='auto', leaf_size=30, p=2, metric='minkowski', metric_params=None, n_jobs=None, **kwargs) [source] Regression based on neighbors within a fixed radius. The target is predicted by local interpolation of the targets associated with the nearest neighbors in the training set. Read more in the User Guide. New in version 0.9. Parameters radiusfloat, default=1.0 Range of parameter space to use by default for radius_neighbors queries. weights{‘uniform’, ‘distance’} or callable, default=’uniform’ Weight function used in prediction. Possible values: ‘uniform’ : uniform weights. All points in each neighborhood are weighted equally. ‘distance’ : weight points by the inverse of their distance. In this case, closer neighbors of a query point will have a greater influence than neighbors which are further away. [callable] : a user-defined function which accepts an array of distances, and returns an array of the same shape containing the weights. Uniform weights are used by default. algorithm{‘auto’, ‘ball_tree’, ‘kd_tree’, ‘brute’}, default=’auto’ Algorithm used to compute the nearest neighbors: ‘ball_tree’ will use BallTree ‘kd_tree’ will use KDTree ‘brute’ will use a brute-force search. ‘auto’ will attempt to decide the most appropriate algorithm based on the values passed to the fit method. Note: fitting on sparse input will override the setting of this parameter, using brute force. leaf_sizeint, default=30 Leaf size passed to BallTree or KDTree. This can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem. pint, default=2 Power parameter for the Minkowski metric. When p = 1, this is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used. 
metricstr or callable, default=’minkowski’ The distance metric to use for the tree. The default metric is minkowski, and with p=2 it is equivalent to the standard Euclidean metric. See the documentation of DistanceMetric for a list of available metrics. If metric is “precomputed”, X is assumed to be a distance matrix and must be square during fit. X may be a sparse graph, in which case only “nonzero” elements may be considered neighbors. metric_paramsdict, default=None Additional keyword arguments for the metric function. n_jobsint, default=None The number of parallel jobs to run for neighbors search. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. Attributes effective_metric_str or callable The distance metric to use. It will be the same as the metric parameter or a synonym of it, e.g. ‘euclidean’ if the metric parameter is set to ‘minkowski’ and the p parameter to 2. effective_metric_params_dict Additional keyword arguments for the metric function. For most metrics this will be the same as the metric_params parameter, but it may also contain the p parameter value if the effective_metric_ attribute is set to ‘minkowski’. n_samples_fit_int Number of samples in the fitted data. See also NearestNeighbors KNeighborsRegressor KNeighborsClassifier RadiusNeighborsClassifier Notes See Nearest Neighbors in the online documentation for a discussion of the choice of algorithm and leaf_size. https://en.wikipedia.org/wiki/K-nearest_neighbor_algorithm Examples >>> X = [[0], [1], [2], [3]] >>> y = [0, 0, 1, 1] >>> from sklearn.neighbors import RadiusNeighborsRegressor >>> neigh = RadiusNeighborsRegressor(radius=1.0) >>> neigh.fit(X, y) RadiusNeighborsRegressor(...) >>> print(neigh.predict([[1.5]])) [0.5] Methods fit(X, y) Fit the radius neighbors regressor from the training dataset. get_params([deep]) Get parameters for this estimator. 
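To make the weights parameter concrete, a minimal sketch on toy data (values chosen for illustration, not from the original page) comparing uniform and inverse-distance weighting of the in-radius neighbors:

```python
from sklearn.neighbors import RadiusNeighborsRegressor

X = [[0], [1], [2], [3]]
y = [0, 0, 1, 1]

uniform = RadiusNeighborsRegressor(radius=1.0, weights='uniform').fit(X, y)
by_dist = RadiusNeighborsRegressor(radius=1.0, weights='distance').fit(X, y)

# The query 1.6 has two in-radius neighbors: x=1 (d=0.6, y=0) and
# x=2 (d=0.4, y=1). Uniform weights take the plain mean of their targets;
# inverse-distance weights pull the prediction toward the closer neighbor.
print(uniform.predict([[1.6]]))  # mean of {0, 1} -> 0.5
print(by_dist.predict([[1.6]]))  # (0/0.6 + 1/0.4) / (1/0.6 + 1/0.4) -> 0.6
```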
class sklearn.neighbors.RadiusNeighborsTransformer(*, mode='distance', radius=1.0, algorithm='auto', leaf_size=30, metric='minkowski', p=2, metric_params=None, n_jobs=1) [source] Transform X into a (weighted) graph of neighbors nearer than a radius. The transformed data is a sparse graph as returned by radius_neighbors_graph. Read more in the User Guide. New in version 0.22. Parameters mode{‘distance’, ‘connectivity’}, default=’distance’ Type of returned matrix: ‘connectivity’ will return the connectivity matrix with ones and zeros, and ‘distance’ will return the distances between neighbors according to the given metric. radiusfloat, default=1.0 Radius of neighborhood in the transformed sparse graph. algorithm{‘auto’, ‘ball_tree’, ‘kd_tree’, ‘brute’}, default=’auto’ Algorithm used to compute the nearest neighbors: ‘ball_tree’ will use BallTree ‘kd_tree’ will use KDTree ‘brute’ will use a brute-force search. ‘auto’ will attempt to decide the most appropriate algorithm based on the values passed to the fit method. Note: fitting on sparse input will override the setting of this parameter, using brute force. leaf_sizeint, default=30 Leaf size passed to BallTree or KDTree. This can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem. metricstr or callable, default=’minkowski’ The metric to use for distance computation. Any metric from scikit-learn or scipy.spatial.distance can be used. If metric is a callable function, it is called on each pair of instances (rows) and the resulting value recorded. The callable should take two arrays as input and return one value indicating the distance between them. This works for SciPy’s metrics, but is less efficient than passing the metric name as a string. Distance matrices are not supported. 
Valid values for metric are: from scikit-learn: [‘cityblock’, ‘cosine’, ‘euclidean’, ‘l1’, ‘l2’, ‘manhattan’] from scipy.spatial.distance: [‘braycurtis’, ‘canberra’, ‘chebyshev’, ‘correlation’, ‘dice’, ‘hamming’, ‘jaccard’, ‘kulsinski’, ‘mahalanobis’, ‘minkowski’, ‘rogerstanimoto’, ‘russellrao’, ‘seuclidean’, ‘sokalmichener’, ‘sokalsneath’, ‘sqeuclidean’, ‘yule’] See the documentation for scipy.spatial.distance for details on these metrics. pint, default=2 Parameter for the Minkowski metric from sklearn.metrics.pairwise.pairwise_distances. When p = 1, this is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used. metric_paramsdict, default=None Additional keyword arguments for the metric function. n_jobsint, default=1 The number of parallel jobs to run for neighbors search. If -1, then the number of jobs is set to the number of CPU cores. Attributes effective_metric_str or callable The distance metric used. It will be the same as the metric parameter or a synonym of it, e.g. ‘euclidean’ if the metric parameter is set to ‘minkowski’ and the p parameter to 2. effective_metric_params_dict Additional keyword arguments for the metric function. For most metrics this will be the same as the metric_params parameter, but it may also contain the p parameter value if the effective_metric_ attribute is set to ‘minkowski’. n_samples_fit_int Number of samples in the fitted data. Examples >>> from sklearn.cluster import DBSCAN >>> from sklearn.neighbors import RadiusNeighborsTransformer >>> from sklearn.pipeline import make_pipeline >>> estimator = make_pipeline( ... RadiusNeighborsTransformer(radius=42.0, mode='distance'), ... DBSCAN(min_samples=30, metric='precomputed')) Methods fit(X[, y]) Fit the radius neighbors transformer from the training dataset. fit_transform(X[, y]) Fit to data, then transform it. get_params([deep]) Get parameters for this estimator. 
radius_neighbors([X, radius, …]) Find the neighbors within a given radius of a point or points. radius_neighbors_graph([X, radius, mode, …]) Compute the (weighted) graph of neighbors for points in X. set_params(**params) Set the parameters of this estimator. transform(X) Compute the (weighted) graph of neighbors for points in X. fit(X, y=None) [source] Fit the radius neighbors transformer from the training dataset. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) or (n_samples, n_samples) if metric=’precomputed’ Training data. Returns selfRadiusNeighborsTransformer The fitted radius neighbors transformer. fit_transform(X, y=None) [source] Fit to data, then transform it. Fits the transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Training set. yignored Returns Xtsparse matrix of shape (n_samples, n_samples) Xt[i, j] is assigned the weight of the edge that connects i to j. Only the neighbors have an explicit value. The diagonal is always explicit. The matrix is in CSR format. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. radius_neighbors(X=None, radius=None, return_distance=True, sort_results=False) [source] Find the neighbors within a given radius of a point or points. Return the indices and distances of each point from the dataset lying in a ball of size radius around the points of the query array. Points lying on the boundary are included in the results. The result points are not necessarily sorted by distance to their query point. Parameters Xarray-like of shape (n_samples, n_features), default=None The query point or points. If not provided, neighbors of each indexed point are returned. 
In this case, the query point is not considered its own neighbor. radiusfloat, default=None Limiting distance of neighbors to return. The default is the value passed to the constructor. return_distancebool, default=True Whether or not to return the distances. sort_resultsbool, default=False If True, the distances and indices will be sorted by increasing distances before being returned. If False, the results may not be sorted. If return_distance=False, setting sort_results=True will result in an error. New in version 0.22. Returns neigh_distndarray of shape (n_samples,) of arrays Array representing the distances to each point, only present if return_distance=True. The distance values are computed according to the metric constructor parameter. neigh_indndarray of shape (n_samples,) of arrays An array of arrays of indices of the approximate nearest points from the population matrix that lie within a ball of size radius around the query points. Notes Because the number of neighbors of each point is not necessarily equal, the results for multiple query points cannot be fit in a standard data array. For efficiency, radius_neighbors returns arrays of objects, where each object is a 1D array of indices or distances. Examples In the following example, we construct a NearestNeighbors instance from an array representing our data set and ask which point is closest to [1, 1, 1]: >>> import numpy as np >>> samples = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5]] >>> from sklearn.neighbors import NearestNeighbors >>> neigh = NearestNeighbors(radius=1.6) >>> neigh.fit(samples) NearestNeighbors(radius=1.6) >>> rng = neigh.radius_neighbors([[1., 1., 1.]]) >>> print(np.asarray(rng[0][0])) [1.5 0.5] >>> print(np.asarray(rng[1][0])) [1 2] The first array returned contains the distances to all points which are closer than 1.6, while the second array returned contains their indices. In general, multiple points can be queried at the same time. 
radius_neighbors_graph(X=None, radius=None, mode='connectivity', sort_results=False) [source] Compute the (weighted) graph of neighbors for points in X. Neighborhoods are restricted to points at a distance lower than radius. Parameters Xarray-like of shape (n_samples, n_features), default=None The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor. radiusfloat, default=None Radius of neighborhoods. The default is the value passed to the constructor. mode{‘connectivity’, ‘distance’}, default=’connectivity’ Type of returned matrix: ‘connectivity’ will return the connectivity matrix with ones and zeros; in ‘distance’ the edges are distances between points according to the given metric. sort_resultsbool, default=False If True, in each row of the result, the non-zero entries will be sorted by increasing distances. If False, the non-zero entries may not be sorted. Only used with mode=’distance’. New in version 0.22. Returns Asparse-matrix of shape (n_queries, n_samples_fit) n_samples_fit is the number of samples in the fitted data. A[i, j] is assigned the weight of the edge that connects i to j. The matrix is in CSR format. See also kneighbors_graph Examples >>> X = [[0], [3], [1]] >>> from sklearn.neighbors import NearestNeighbors >>> neigh = NearestNeighbors(radius=1.5) >>> neigh.fit(X) NearestNeighbors(radius=1.5) >>> A = neigh.radius_neighbors_graph(X) >>> A.toarray() array([[1., 0., 1.], [0., 1., 0.], [1., 0., 1.]]) set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. 
transform(X) [source] Compute the (weighted) graph of neighbors for points in X. Parameters Xarray-like of shape (n_samples_transform, n_features) Sample data. Returns Xtsparse matrix of shape (n_samples_transform, n_samples_fit) Xt[i, j] is assigned the weight of the edge that connects i to j. Only the neighbors have an explicit value. The diagonal is always explicit. The matrix is in CSR format.
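A minimal end-to-end sketch of the transformer on toy one-dimensional data (values chosen for illustration): fit_transform returns a sparse graph in which only in-radius pairs carry a distance entry, which precomputed-metric estimators can then consume:

```python
from sklearn.neighbors import RadiusNeighborsTransformer

X = [[0.0], [1.0], [4.0]]
t = RadiusNeighborsTransformer(radius=1.5, mode='distance')
Xt = t.fit_transform(X)  # sparse matrix of shape (3, 3)

# Only the pair (0, 1), at distance 1.0 <= 1.5, is an off-diagonal edge;
# points 0/2 and 1/2 are farther apart than the radius, so those entries
# stay zero in the dense view.
print(Xt.toarray())
```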
sklearn.neighbors.RadiusNeighborsTransformer class sklearn.neighbors.RadiusNeighborsTransformer(*, mode='distance', radius=1.0, algorithm='auto', leaf_size=30, metric='minkowski', p=2, metric_params=None, n_jobs=1) [source] Transform X into a (weighted) graph of neighbors nearer than a radius The transformed data is a sparse graph as returned by radius_neighbors_graph. Read more in the User Guide. New in version 0.22. Parameters mode{‘distance’, ‘connectivity’}, default=’distance’ Type of returned matrix: ‘connectivity’ will return the connectivity matrix with ones and zeros, and ‘distance’ will return the distances between neighbors according to the given metric. radiusfloat, default=1. Radius of neighborhood in the transformed sparse graph. algorithm{‘auto’, ‘ball_tree’, ‘kd_tree’, ‘brute’}, default=’auto’ Algorithm used to compute the nearest neighbors: ‘ball_tree’ will use BallTree ‘kd_tree’ will use KDTree ‘brute’ will use a brute-force search. ‘auto’ will attempt to decide the most appropriate algorithm based on the values passed to fit method. Note: fitting on sparse input will override the setting of this parameter, using brute force. leaf_sizeint, default=30 Leaf size passed to BallTree or KDTree. This can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem. metricstr or callable, default=’minkowski’ metric to use for distance computation. Any metric from scikit-learn or scipy.spatial.distance can be used. If metric is a callable function, it is called on each pair of instances (rows) and the resulting value recorded. The callable should take two arrays as input and return one value indicating the distance between them. This works for Scipy’s metrics, but is less efficient than passing the metric name as a string. Distance matrices are not supported. 
Valid values for metric are: from scikit-learn: [‘cityblock’, ‘cosine’, ‘euclidean’, ‘l1’, ‘l2’, ‘manhattan’] from scipy.spatial.distance: [‘braycurtis’, ‘canberra’, ‘chebyshev’, ‘correlation’, ‘dice’, ‘hamming’, ‘jaccard’, ‘kulsinski’, ‘mahalanobis’, ‘minkowski’, ‘rogerstanimoto’, ‘russellrao’, ‘seuclidean’, ‘sokalmichener’, ‘sokalsneath’, ‘sqeuclidean’, ‘yule’] See the documentation for scipy.spatial.distance for details on these metrics. pint, default=2 Parameter for the Minkowski metric from sklearn.metrics.pairwise.pairwise_distances. When p = 1, this is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used. metric_paramsdict, default=None Additional keyword arguments for the metric function. n_jobsint, default=1 The number of parallel jobs to run for neighbors search. If -1, then the number of jobs is set to the number of CPU cores. Attributes effective_metric_str or callable The distance metric used. It will be the same as the metric parameter or a synonym of it, e.g. ‘euclidean’ if the metric parameter is set to ‘minkowski’ and the p parameter is set to 2. effective_metric_params_dict Additional keyword arguments for the metric function. For most metrics this will be the same as the metric_params parameter, but it may also contain the p parameter value if the effective_metric_ attribute is set to ‘minkowski’. n_samples_fit_int Number of samples in the fitted data. Examples >>> from sklearn.cluster import DBSCAN >>> from sklearn.neighbors import RadiusNeighborsTransformer >>> from sklearn.pipeline import make_pipeline >>> estimator = make_pipeline( ... RadiusNeighborsTransformer(radius=42.0, mode='distance'), ... DBSCAN(min_samples=30, metric='precomputed')) Methods fit(X[, y]) Fit the radius neighbors transformer from the training dataset. fit_transform(X[, y]) Fit to data, then transform it. get_params([deep]) Get parameters for this estimator.
radius_neighbors([X, radius, …]) Finds the neighbors within a given radius of a point or points. radius_neighbors_graph([X, radius, mode, …]) Computes the (weighted) graph of neighbors for points in X. set_params(**params) Set the parameters of this estimator. transform(X) Computes the (weighted) graph of neighbors for points in X. fit(X, y=None) [source] Fit the radius neighbors transformer from the training dataset. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) or (n_samples, n_samples) if metric=’precomputed’ Training data. Returns selfRadiusNeighborsTransformer The fitted radius neighbors transformer. fit_transform(X, y=None) [source] Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Training set. yignored Returns Xtsparse matrix of shape (n_samples, n_samples) Xt[i, j] is assigned the weight of the edge that connects i to j. Only the neighbors have an explicit value. The diagonal is always explicit. The matrix is of CSR format. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. radius_neighbors(X=None, radius=None, return_distance=True, sort_results=False) [source] Finds the neighbors within a given radius of a point or points. Return the indices and distances of each point from the dataset lying in a ball with size radius around the points of the query array. Points lying on the boundary are included in the results. The result points are not necessarily sorted by distance to their query point. Parameters Xarray-like of shape (n_samples, n_features), default=None The query point or points. If not provided, neighbors of each indexed point are returned.
In this case, the query point is not considered its own neighbor. radiusfloat, default=None Limiting distance of neighbors to return. The default is the value passed to the constructor. return_distancebool, default=True Whether or not to return the distances. sort_resultsbool, default=False If True, the distances and indices will be sorted by increasing distances before being returned. If False, the results may not be sorted. If return_distance=False, setting sort_results=True will result in an error. New in version 0.22. Returns neigh_distndarray of shape (n_samples,) of arrays Array representing the distances to each point, only present if return_distance=True. The distance values are computed according to the metric constructor parameter. neigh_indndarray of shape (n_samples,) of arrays An array of arrays of indices of the approximate nearest points from the population matrix that lie within a ball of size radius around the query points. Notes Because the number of neighbors of each point is not necessarily equal, the results for multiple query points cannot be fit in a standard data array. For efficiency, radius_neighbors returns arrays of objects, where each object is a 1D array of indices or distances. Examples In the following example, we construct a NeighborsClassifier class from an array representing our data set and ask who’s the closest point to [1, 1, 1]: >>> import numpy as np >>> samples = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5]] >>> from sklearn.neighbors import NearestNeighbors >>> neigh = NearestNeighbors(radius=1.6) >>> neigh.fit(samples) NearestNeighbors(radius=1.6) >>> rng = neigh.radius_neighbors([[1., 1., 1.]]) >>> print(np.asarray(rng[0][0])) [1.5 0.5] >>> print(np.asarray(rng[1][0])) [1 2] The first array returned contains the distances to all points which are closer than 1.6, while the second array returned contains their indices. In general, multiple points can be queried at the same time. 
radius_neighbors_graph(X=None, radius=None, mode='connectivity', sort_results=False) [source] Computes the (weighted) graph of neighbors for points in X. Neighborhoods are restricted to points at a distance lower than radius. Parameters Xarray-like of shape (n_samples, n_features), default=None The query point or points. If not provided, neighbors of each indexed point are returned. In this case, the query point is not considered its own neighbor. radiusfloat, default=None Radius of neighborhoods. The default is the value passed to the constructor. mode{‘connectivity’, ‘distance’}, default=’connectivity’ Type of returned matrix: ‘connectivity’ will return the connectivity matrix with ones and zeros; in ‘distance’ the edges are Euclidean distances between points. sort_resultsbool, default=False If True, in each row of the result, the non-zero entries will be sorted by increasing distances. If False, the non-zero entries may not be sorted. Only used with mode=’distance’. New in version 0.22. Returns Asparse-matrix of shape (n_queries, n_samples_fit) n_samples_fit is the number of samples in the fitted data. A[i, j] is assigned the weight of the edge that connects i to j. The matrix is in CSR format. See also kneighbors_graph Examples >>> X = [[0], [3], [1]] >>> from sklearn.neighbors import NearestNeighbors >>> neigh = NearestNeighbors(radius=1.5) >>> neigh.fit(X) NearestNeighbors(radius=1.5) >>> A = neigh.radius_neighbors_graph(X) >>> A.toarray() array([[1., 0., 1.], [0., 1., 0.], [1., 0., 1.]]) set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
transform(X) [source] Computes the (weighted) graph of neighbors for points in X. Parameters Xarray-like of shape (n_samples_transform, n_features) Sample data. Returns Xtsparse matrix of shape (n_samples_transform, n_samples_fit) Xt[i, j] is assigned the weight of the edge that connects i to j. Only the neighbors have an explicit value. The diagonal is always explicit. The matrix is of CSR format.
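As a sketch of typical use of the class above (the toy data and radius below are illustrative, not from the original docs): fit the transformer on a few 1-D points and inspect the resulting sparse distance graph.

```python
import numpy as np
from scipy import sparse
from sklearn.neighbors import RadiusNeighborsTransformer

# Three 1-D points; with radius=1.5 only the first two are mutual neighbors.
X = np.array([[0.0], [1.0], [3.0]])
trans = RadiusNeighborsTransformer(radius=1.5, mode='distance')
Xt = trans.fit_transform(X)

print(Xt.shape)             # (3, 3): n_samples x n_samples_fit
print(sparse.issparse(Xt))  # True: the graph is returned as a sparse matrix
print(Xt[0, 1])             # 1.0: the stored distance between points 0 and 1
```

Because the output is a precomputed sparse graph, it can be fed directly to estimators that accept metric='precomputed', as in the DBSCAN pipeline example above.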
sklearn.neighbors.radius_neighbors_graph(X, radius, *, mode='connectivity', metric='minkowski', p=2, metric_params=None, include_self=False, n_jobs=None) [source] Computes the (weighted) graph of neighbors for points in X. Neighborhoods are restricted to points at a distance lower than radius. Read more in the User Guide. Parameters Xarray-like of shape (n_samples, n_features) or BallTree Sample data, in the form of a numpy array or a precomputed BallTree. radiusfloat Radius of neighborhoods. mode{‘connectivity’, ‘distance’}, default=’connectivity’ Type of returned matrix: ‘connectivity’ will return the connectivity matrix with ones and zeros, and ‘distance’ will return the distances between neighbors according to the given metric. metricstr, default=’minkowski’ The distance metric used to calculate the neighbors within a given radius for each sample point. The DistanceMetric class gives a list of available metrics. The default distance is ‘euclidean’ (‘minkowski’ metric with the p param equal to 2). pint, default=2 Power parameter for the Minkowski metric. When p = 1, this is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used. metric_paramsdict, default=None Additional keyword arguments for the metric function. include_selfbool or ‘auto’, default=False Whether or not to mark each sample as the first nearest neighbor to itself. If ‘auto’, then True is used for mode=’connectivity’ and False for mode=’distance’. n_jobsint, default=None The number of parallel jobs to run for neighbors search. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. Returns Asparse matrix of shape (n_samples, n_samples) Graph where A[i, j] is assigned the weight of the edge that connects i to j. The matrix is of CSR format.
See also kneighbors_graph Examples >>> X = [[0], [3], [1]] >>> from sklearn.neighbors import radius_neighbors_graph >>> A = radius_neighbors_graph(X, 1.5, mode='connectivity', ... include_self=True) >>> A.toarray() array([[1., 0., 1.], [0., 1., 0.], [1., 0., 1.]])
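A hedged variant of the example above: the same helper with mode='distance' stores the actual distances between neighbors instead of 0/1 connectivity (include_self=False keeps the diagonal empty, which matches the default).

```python
from sklearn.neighbors import radius_neighbors_graph

X = [[0], [3], [1]]
# With mode='distance', stored entries are the distances between neighbors.
A = radius_neighbors_graph(X, 1.5, mode='distance', include_self=False)
print(A.toarray())
# [[0. 0. 1.]
#  [0. 0. 0.]
#  [1. 0. 0.]]
```

Only the pair (0, 1) lies within the radius 1.5, so its distance 1.0 is the single symmetric non-zero entry; the point at 3 has no neighbors.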
class sklearn.neural_network.BernoulliRBM(n_components=256, *, learning_rate=0.1, batch_size=10, n_iter=10, verbose=0, random_state=None) [source] Bernoulli Restricted Boltzmann Machine (RBM). A Restricted Boltzmann Machine with binary visible units and binary hidden units. Parameters are estimated using Stochastic Maximum Likelihood (SML), also known as Persistent Contrastive Divergence (PCD) [2]. The time complexity of this implementation is O(d ** 2) assuming d ~ n_features ~ n_components. Read more in the User Guide. Parameters n_componentsint, default=256 Number of binary hidden units. learning_ratefloat, default=0.1 The learning rate for weight updates. It is highly recommended to tune this hyper-parameter. Reasonable values are in the 10**[0., -3.] range. batch_sizeint, default=10 Number of examples per minibatch. n_iterint, default=10 Number of iterations/sweeps over the training dataset to perform during training. verboseint, default=0 The verbosity level. The default, zero, means silent mode. random_stateint, RandomState instance or None, default=None Determines random number generation for: Gibbs sampling from visible and hidden layers. Initializing components, sampling from layers during fit. Corrupting the data when scoring samples. Pass an int for reproducible results across multiple function calls. See Glossary. Attributes intercept_hidden_array-like of shape (n_components,) Biases of the hidden units. intercept_visible_array-like of shape (n_features,) Biases of the visible units. components_array-like of shape (n_components, n_features) Weight matrix, where n_features is the number of visible units and n_components is the number of hidden units. h_samples_array-like of shape (batch_size, n_components) Hidden activations sampled from the model distribution, where batch_size is the number of examples per minibatch and n_components is the number of hidden units. References [1] Hinton, G. E., Osindero, S. and Teh, Y.
A fast learning algorithm for deep belief nets. Neural Computation 18, pp 1527-1554. https://www.cs.toronto.edu/~hinton/absps/fastnc.pdf [2] Tieleman, T. Training Restricted Boltzmann Machines using Approximations to the Likelihood Gradient. International Conference on Machine Learning (ICML) 2008 Examples >>> import numpy as np >>> from sklearn.neural_network import BernoulliRBM >>> X = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]]) >>> model = BernoulliRBM(n_components=2) >>> model.fit(X) BernoulliRBM(n_components=2) Methods fit(X[, y]) Fit the model to the data X. fit_transform(X[, y]) Fit to data, then transform it. get_params([deep]) Get parameters for this estimator. gibbs(v) Perform one Gibbs sampling step. partial_fit(X[, y]) Fit the model to the data X which should contain a partial segment of the data. score_samples(X) Compute the pseudo-likelihood of X. set_params(**params) Set the parameters of this estimator. transform(X) Compute the hidden layer activation probabilities, P(h=1|v=X). fit(X, y=None) [source] Fit the model to the data X. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training data. Returns selfBernoulliRBM The fitted model. fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. Returns X_newndarray array of shape (n_samples, n_features_new) Transformed array. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. 
gibbs(v) [source] Perform one Gibbs sampling step. Parameters vndarray of shape (n_samples, n_features) Values of the visible layer to start from. Returns v_newndarray of shape (n_samples, n_features) Values of the visible layer after one Gibbs step. partial_fit(X, y=None) [source] Fit the model to the data X which should contain a partial segment of the data. Parameters Xndarray of shape (n_samples, n_features) Training data. Returns selfBernoulliRBM The fitted model. score_samples(X) [source] Compute the pseudo-likelihood of X. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Values of the visible layer. Must be all-boolean (not checked). Returns pseudo_likelihoodndarray of shape (n_samples,) Value of the pseudo-likelihood (proxy for likelihood). Notes This method is not deterministic: it computes a quantity called the free energy on X, then on a randomly corrupted version of X, and returns the log of the logistic function of the difference. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. transform(X) [source] Compute the hidden layer activation probabilities, P(h=1|v=X). Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) The data to be transformed. Returns hndarray of shape (n_samples, n_components) Latent representations of the data.
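To make the transform and gibbs methods above concrete, here is a small sketch (the toy data and hyper-parameters are illustrative): fit an RBM, read out the hidden activation probabilities, then take one Gibbs step from the data.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

X = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
rbm = BernoulliRBM(n_components=2, n_iter=5, random_state=0)
rbm.fit(X)

# P(h=1|v=X): one probability per hidden unit, per sample.
H = rbm.transform(X)
print(H.shape)   # (4, 2)

# One full Gibbs step: sample hiddens from X, then resample the visibles.
V = rbm.gibbs(X)
print(V.shape)   # (4, 3)
```

Note that transform returns probabilities in [0, 1], while gibbs returns sampled binary visible values, so repeated gibbs calls give a stochastic reconstruction chain.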
sklearn.modules.generated.sklearn.neural_network.bernoullirbm#sklearn.neural_network.BernoulliRBM
sklearn.neural_network.BernoulliRBM class sklearn.neural_network.BernoulliRBM(n_components=256, *, learning_rate=0.1, batch_size=10, n_iter=10, verbose=0, random_state=None) [source] Bernoulli Restricted Boltzmann Machine (RBM). A Restricted Boltzmann Machine with binary visible units and binary hidden units. Parameters are estimated using Stochastic Maximum Likelihood (SML), also known as Persistent Contrastive Divergence (PCD) [2]. The time complexity of this implementation is O(d ** 2) assuming d ~ n_features ~ n_components. Read more in the User Guide. Parameters n_componentsint, default=256 Number of binary hidden units. learning_ratefloat, default=0.1 The learning rate for weight updates. It is highly recommended to tune this hyper-parameter. Reasonable values are in the 10**[0., -3.] range. batch_sizeint, default=10 Number of examples per minibatch. n_iterint, default=10 Number of iterations/sweeps over the training dataset to perform during training. verboseint, default=0 The verbosity level. The default, zero, means silent mode. random_stateint, RandomState instance or None, default=None Determines random number generation for: Gibbs sampling from visible and hidden layers. Initializing components, sampling from layers during fit. Corrupting the data when scoring samples. Pass an int for reproducible results across multiple function calls. See Glossary. Attributes intercept_hidden_array-like of shape (n_components,) Biases of the hidden units. intercept_visible_array-like of shape (n_features,) Biases of the visible units. components_array-like of shape (n_components, n_features) Weight matrix, where n_features in the number of visible units and n_components is the number of hidden units. h_samples_array-like of shape (batch_size, n_components) Hidden Activation sampled from the model distribution, where batch_size in the number of examples per minibatch and n_components is the number of hidden units. References [1] Hinton, G. E., Osindero, S. and Teh, Y. 
A fast learning algorithm for deep belief nets. Neural Computation 18, pp 1527-1554. https://www.cs.toronto.edu/~hinton/absps/fastnc.pdf [2] Tieleman, T. Training Restricted Boltzmann Machines using Approximations to the Likelihood Gradient. International Conference on Machine Learning (ICML) 2008 Examples >>> import numpy as np >>> from sklearn.neural_network import BernoulliRBM >>> X = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]]) >>> model = BernoulliRBM(n_components=2) >>> model.fit(X) BernoulliRBM(n_components=2) Methods fit(X[, y]) Fit the model to the data X. fit_transform(X[, y]) Fit to data, then transform it. get_params([deep]) Get parameters for this estimator. gibbs(v) Perform one Gibbs sampling step. partial_fit(X[, y]) Fit the model to the data X which should contain a partial segment of the data. score_samples(X) Compute the pseudo-likelihood of X. set_params(**params) Set the parameters of this estimator. transform(X) Compute the hidden layer activation probabilities, P(h=1|v=X). fit(X, y=None) [source] Fit the model to the data X. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training data. Returns selfBernoulliRBM The fitted model. fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. Returns X_newndarray array of shape (n_samples, n_features_new) Transformed array. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. 
gibbs(v) [source] Perform one Gibbs sampling step. Parameters vndarray of shape (n_samples, n_features) Values of the visible layer to start from. Returns v_newndarray of shape (n_samples, n_features) Values of the visible layer after one Gibbs step. partial_fit(X, y=None) [source] Fit the model to the data X which should contain a partial segment of the data. Parameters Xndarray of shape (n_samples, n_features) Training data. Returns selfBernoulliRBM The fitted model. score_samples(X) [source] Compute the pseudo-likelihood of X. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Values of the visible layer. Must be all-boolean (not checked). Returns pseudo_likelihoodndarray of shape (n_samples,) Value of the pseudo-likelihood (proxy for likelihood). Notes This method is not deterministic: it computes a quantity called the free energy on X, then on a randomly corrupted version of X, and returns the log of the logistic function of the difference. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance. transform(X) [source] Compute the hidden layer activation probabilities, P(h=1|v=X). Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) The data to be transformed. Returns hndarray of shape (n_samples, n_components) Latent representations of the data. Examples using sklearn.neural_network.BernoulliRBM Restricted Boltzmann Machine features for digit classification
sklearn.modules.generated.sklearn.neural_network.bernoullirbm
fit(X, y=None) [source] Fit the model to the data X. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Training data. Returns selfBernoulliRBM The fitted model.
sklearn.modules.generated.sklearn.neural_network.bernoullirbm#sklearn.neural_network.BernoulliRBM.fit
fit_transform(X, y=None, **fit_params) [source] Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X. Parameters Xarray-like of shape (n_samples, n_features) Input samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None Target values (None for unsupervised transformations). **fit_paramsdict Additional fit parameters. Returns X_newndarray array of shape (n_samples, n_features_new) Transformed array.
sklearn.modules.generated.sklearn.neural_network.bernoullirbm#sklearn.neural_network.BernoulliRBM.fit_transform
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
sklearn.modules.generated.sklearn.neural_network.bernoullirbm#sklearn.neural_network.BernoulliRBM.get_params
gibbs(v) [source] Perform one Gibbs sampling step. Parameters vndarray of shape (n_samples, n_features) Values of the visible layer to start from. Returns v_newndarray of shape (n_samples, n_features) Values of the visible layer after one Gibbs step.
sklearn.modules.generated.sklearn.neural_network.bernoullirbm#sklearn.neural_network.BernoulliRBM.gibbs
partial_fit(X, y=None) [source] Fit the model to the data X which should contain a partial segment of the data. Parameters Xndarray of shape (n_samples, n_features) Training data. Returns selfBernoulliRBM The fitted model.
sklearn.modules.generated.sklearn.neural_network.bernoullirbm#sklearn.neural_network.BernoulliRBM.partial_fit
score_samples(X) [source] Compute the pseudo-likelihood of X. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) Values of the visible layer. Must be all-boolean (not checked). Returns pseudo_likelihoodndarray of shape (n_samples,) Value of the pseudo-likelihood (proxy for likelihood). Notes This method is not deterministic: it computes a quantity called the free energy on X, then on a randomly corrupted version of X, and returns the log of the logistic function of the difference.
sklearn.modules.generated.sklearn.neural_network.bernoullirbm#sklearn.neural_network.BernoulliRBM.score_samples
set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
sklearn.modules.generated.sklearn.neural_network.bernoullirbm#sklearn.neural_network.BernoulliRBM.set_params
transform(X) [source] Compute the hidden layer activation probabilities, P(h=1|v=X). Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) The data to be transformed. Returns hndarray of shape (n_samples, n_components) Latent representations of the data.
class sklearn.neural_network.MLPClassifier(hidden_layer_sizes=100, activation='relu', *, solver='adam', alpha=0.0001, batch_size='auto', learning_rate='constant', learning_rate_init=0.001, power_t=0.5, max_iter=200, shuffle=True, random_state=None, tol=0.0001, verbose=False, warm_start=False, momentum=0.9, nesterovs_momentum=True, early_stopping=False, validation_fraction=0.1, beta_1=0.9, beta_2=0.999, epsilon=1e-08, n_iter_no_change=10, max_fun=15000) [source] Multi-layer Perceptron classifier. This model optimizes the log-loss function using LBFGS or stochastic gradient descent. New in version 0.18. Parameters hidden_layer_sizestuple, length = n_layers - 2, default=(100,) The ith element represents the number of neurons in the ith hidden layer. activation{‘identity’, ‘logistic’, ‘tanh’, ‘relu’}, default=’relu’ Activation function for the hidden layer. ‘identity’, no-op activation, useful to implement linear bottleneck, returns f(x) = x ‘logistic’, the logistic sigmoid function, returns f(x) = 1 / (1 + exp(-x)). ‘tanh’, the hyperbolic tan function, returns f(x) = tanh(x). ‘relu’, the rectified linear unit function, returns f(x) = max(0, x) solver{‘lbfgs’, ‘sgd’, ‘adam’}, default=’adam’ The solver for weight optimization. ‘lbfgs’ is an optimizer in the family of quasi-Newton methods. ‘sgd’ refers to stochastic gradient descent. ‘adam’ refers to a stochastic gradient-based optimizer proposed by Kingma, Diederik, and Jimmy Ba Note: The default solver ‘adam’ works pretty well on relatively large datasets (with thousands of training samples or more) in terms of both training time and validation score. For small datasets, however, ‘lbfgs’ can converge faster and perform better. alphafloat, default=0.0001 L2 penalty (regularization term) parameter. batch_sizeint, default=’auto’ Size of minibatches for stochastic optimizers. If the solver is ‘lbfgs’, the classifier will not use minibatch. 
When set to “auto”, batch_size=min(200, n_samples) learning_rate{‘constant’, ‘invscaling’, ‘adaptive’}, default=’constant’ Learning rate schedule for weight updates. ‘constant’ is a constant learning rate given by ‘learning_rate_init’. ‘invscaling’ gradually decreases the learning rate at each time step ‘t’ using an inverse scaling exponent of ‘power_t’. effective_learning_rate = learning_rate_init / pow(t, power_t) ‘adaptive’ keeps the learning rate constant to ‘learning_rate_init’ as long as training loss keeps decreasing. Each time two consecutive epochs fail to decrease training loss by at least tol, or fail to increase validation score by at least tol if ‘early_stopping’ is on, the current learning rate is divided by 5. Only used when solver='sgd'. learning_rate_initdouble, default=0.001 The initial learning rate used. It controls the step-size in updating the weights. Only used when solver=’sgd’ or ‘adam’. power_tdouble, default=0.5 The exponent for inverse scaling learning rate. It is used in updating effective learning rate when the learning_rate is set to ‘invscaling’. Only used when solver=’sgd’. max_iterint, default=200 Maximum number of iterations. The solver iterates until convergence (determined by ‘tol’) or this number of iterations. For stochastic solvers (‘sgd’, ‘adam’), note that this determines the number of epochs (how many times each data point will be used), not the number of gradient steps. shufflebool, default=True Whether to shuffle samples in each iteration. Only used when solver=’sgd’ or ‘adam’. random_stateint, RandomState instance, default=None Determines random number generation for weights and bias initialization, train-test split if early stopping is used, and batch sampling when solver=’sgd’ or ‘adam’. Pass an int for reproducible results across multiple function calls. See Glossary. tolfloat, default=1e-4 Tolerance for the optimization. 
When the loss or score is not improving by at least tol for n_iter_no_change consecutive iterations, unless learning_rate is set to ‘adaptive’, convergence is considered to be reached and training stops. verbosebool, default=False Whether to print progress messages to stdout. warm_startbool, default=False When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See the Glossary. momentumfloat, default=0.9 Momentum for gradient descent update. Should be between 0 and 1. Only used when solver=’sgd’. nesterovs_momentumbool, default=True Whether to use Nesterov’s momentum. Only used when solver=’sgd’ and momentum > 0. early_stoppingbool, default=False Whether to use early stopping to terminate training when validation score is not improving. If set to true, it will automatically set aside 10% of training data as validation and terminate training when validation score is not improving by at least tol for n_iter_no_change consecutive epochs. The split is stratified, except in a multilabel setting. Only effective when solver=’sgd’ or ‘adam’ validation_fractionfloat, default=0.1 The proportion of training data to set aside as validation set for early stopping. Must be between 0 and 1. Only used if early_stopping is True beta_1float, default=0.9 Exponential decay rate for estimates of first moment vector in adam, should be in [0, 1). Only used when solver=’adam’ beta_2float, default=0.999 Exponential decay rate for estimates of second moment vector in adam, should be in [0, 1). Only used when solver=’adam’ epsilonfloat, default=1e-8 Value for numerical stability in adam. Only used when solver=’adam’ n_iter_no_changeint, default=10 Maximum number of epochs to not meet tol improvement. Only effective when solver=’sgd’ or ‘adam’ New in version 0.20. max_funint, default=15000 Only used when solver=’lbfgs’. Maximum number of loss function calls. 
The solver iterates until convergence (determined by ‘tol’), number of iterations reaches max_iter, or this number of loss function calls. Note that number of loss function calls will be greater than or equal to the number of iterations for the MLPClassifier. New in version 0.22. Attributes classes_ndarray or list of ndarray of shape (n_classes,) Class labels for each output. loss_float The current loss computed with the loss function. best_loss_float The minimum loss reached by the solver throughout fitting. loss_curve_list of shape (n_iter_,) The ith element in the list represents the loss at the ith iteration. t_int The number of training samples seen by the solver during fitting. coefs_list of shape (n_layers - 1,) The ith element in the list represents the weight matrix corresponding to layer i. intercepts_list of shape (n_layers - 1,) The ith element in the list represents the bias vector corresponding to layer i + 1. n_iter_int The number of iterations the solver has run. n_layers_int Number of layers. n_outputs_int Number of outputs. out_activation_str Name of the output activation function. Notes MLPClassifier trains iteratively since at each time step the partial derivatives of the loss function with respect to the model parameters are computed to update the parameters. It can also have a regularization term added to the loss function that shrinks model parameters to prevent overfitting. This implementation works with data represented as dense numpy arrays or sparse scipy arrays of floating point values. References Hinton, Geoffrey E. “Connectionist learning procedures.” Artificial intelligence 40.1 (1989): 185-234. Glorot, Xavier, and Yoshua Bengio. “Understanding the difficulty of training deep feedforward neural networks.” International Conference on Artificial Intelligence and Statistics. 2010. He, Kaiming, et al. “Delving deep into rectifiers: Surpassing human-level performance on imagenet classification.” arXiv preprint arXiv:1502.01852 (2015). 
Kingma, Diederik, and Jimmy Ba. “Adam: A method for stochastic optimization.” arXiv preprint arXiv:1412.6980 (2014). Examples >>> from sklearn.neural_network import MLPClassifier >>> from sklearn.datasets import make_classification >>> from sklearn.model_selection import train_test_split >>> X, y = make_classification(n_samples=100, random_state=1) >>> X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, ... random_state=1) >>> clf = MLPClassifier(random_state=1, max_iter=300).fit(X_train, y_train) >>> clf.predict_proba(X_test[:1]) array([[0.038..., 0.961...]]) >>> clf.predict(X_test[:5, :]) array([1, 0, 1, 0, 1]) >>> clf.score(X_test, y_test) 0.8... Methods fit(X, y) Fit the model to data matrix X and target(s) y. get_params([deep]) Get parameters for this estimator. predict(X) Predict using the multi-layer perceptron classifier predict_log_proba(X) Return the log of probability estimates. predict_proba(X) Probability estimates. score(X, y[, sample_weight]) Return the mean accuracy on the given test data and labels. set_params(**params) Set the parameters of this estimator. fit(X, y) [source] Fit the model to data matrix X and target(s) y. Parameters Xndarray or sparse matrix of shape (n_samples, n_features) The input data. yndarray of shape (n_samples,) or (n_samples, n_outputs) The target values (class labels in classification, real numbers in regression). Returns selfreturns a trained MLP model. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. property partial_fit Update the model with a single iteration over the given data. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) The input data. yarray-like of shape (n_samples,) The target values. 
classesarray of shape (n_classes,), default=None Classes across all calls to partial_fit. Can be obtained via np.unique(y_all), where y_all is the target vector of the entire dataset. This argument is required for the first call to partial_fit and can be omitted in the subsequent calls. Note that y doesn’t need to contain all labels in classes. Returns selfreturns a trained MLP model. predict(X) [source] Predict using the multi-layer perceptron classifier Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) The input data. Returns yndarray, shape (n_samples,) or (n_samples, n_classes) The predicted classes. predict_log_proba(X) [source] Return the log of probability estimates. Parameters Xndarray of shape (n_samples, n_features) The input data. Returns log_y_probndarray of shape (n_samples, n_classes) The predicted log-probability of the sample for each class in the model, where classes are ordered as they are in self.classes_. Equivalent to log(predict_proba(X)) predict_proba(X) [source] Probability estimates. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) The input data. Returns y_probndarray of shape (n_samples, n_classes) The predicted probability of the sample for each class in the model, where classes are ordered as they are in self.classes_. score(X, y, sample_weight=None) [source] Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted. Parameters Xarray-like of shape (n_samples, n_features) Test samples. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True labels for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat Mean accuracy of self.predict(X) wrt. y. set_params(**params) [source] Set the parameters of this estimator. 
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
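As a hedged sketch of the partial_fit workflow described above: the first call must receive the full label set via classes=, and subsequent calls may omit it. The batch size of 20 and hidden_layer_sizes=(10,) below are arbitrary illustrative choices:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=100, random_state=1)

# partial_fit is available for the stochastic solvers ('sgd', 'adam');
# each call performs a single iteration over the given batch.
clf = MLPClassifier(hidden_layer_sizes=(10,), random_state=1)
classes = np.unique(y)  # all labels, known up front

for start in range(0, 100, 20):
    clf.partial_fit(X[start:start + 20], y[start:start + 20], classes=classes)

print(clf.predict(X).shape)  # (100,)
```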
Examples using sklearn.neural_network.MLPClassifier: Classifier comparison; Visualization of MLP weights on MNIST; Compare Stochastic learning strategies for MLPClassifier; Varying regularization in Multi-layer Perceptron
class sklearn.neural_network.MLPRegressor(hidden_layer_sizes=100, activation='relu', *, solver='adam', alpha=0.0001, batch_size='auto', learning_rate='constant', learning_rate_init=0.001, power_t=0.5, max_iter=200, shuffle=True, random_state=None, tol=0.0001, verbose=False, warm_start=False, momentum=0.9, nesterovs_momentum=True, early_stopping=False, validation_fraction=0.1, beta_1=0.9, beta_2=0.999, epsilon=1e-08, n_iter_no_change=10, max_fun=15000) [source] Multi-layer Perceptron regressor. This model optimizes the squared-loss using LBFGS or stochastic gradient descent. New in version 0.18. Parameters hidden_layer_sizestuple, length = n_layers - 2, default=(100,) The ith element represents the number of neurons in the ith hidden layer. activation{‘identity’, ‘logistic’, ‘tanh’, ‘relu’}, default=’relu’ Activation function for the hidden layer. ‘identity’, no-op activation, useful to implement linear bottleneck, returns f(x) = x ‘logistic’, the logistic sigmoid function, returns f(x) = 1 / (1 + exp(-x)). ‘tanh’, the hyperbolic tan function, returns f(x) = tanh(x). ‘relu’, the rectified linear unit function, returns f(x) = max(0, x) solver{‘lbfgs’, ‘sgd’, ‘adam’}, default=’adam’ The solver for weight optimization. ‘lbfgs’ is an optimizer in the family of quasi-Newton methods. ‘sgd’ refers to stochastic gradient descent. ‘adam’ refers to a stochastic gradient-based optimizer proposed by Kingma, Diederik, and Jimmy Ba Note: The default solver ‘adam’ works pretty well on relatively large datasets (with thousands of training samples or more) in terms of both training time and validation score. For small datasets, however, ‘lbfgs’ can converge faster and perform better. alphafloat, default=0.0001 L2 penalty (regularization term) parameter. batch_sizeint, default=’auto’ Size of minibatches for stochastic optimizers. If the solver is ‘lbfgs’, the classifier will not use minibatch. 
When set to “auto”, batch_size=min(200, n_samples) learning_rate{‘constant’, ‘invscaling’, ‘adaptive’}, default=’constant’ Learning rate schedule for weight updates. ‘constant’ is a constant learning rate given by ‘learning_rate_init’. ‘invscaling’ gradually decreases the learning rate at each time step ‘t’ using an inverse scaling exponent of ‘power_t’. effective_learning_rate = learning_rate_init / pow(t, power_t) ‘adaptive’ keeps the learning rate constant to ‘learning_rate_init’ as long as training loss keeps decreasing. Each time two consecutive epochs fail to decrease training loss by at least tol, or fail to increase validation score by at least tol if ‘early_stopping’ is on, the current learning rate is divided by 5. Only used when solver=’sgd’. learning_rate_initdouble, default=0.001 The initial learning rate used. It controls the step-size in updating the weights. Only used when solver=’sgd’ or ‘adam’. power_tdouble, default=0.5 The exponent for inverse scaling learning rate. It is used in updating effective learning rate when the learning_rate is set to ‘invscaling’. Only used when solver=’sgd’. max_iterint, default=200 Maximum number of iterations. The solver iterates until convergence (determined by ‘tol’) or this number of iterations. For stochastic solvers (‘sgd’, ‘adam’), note that this determines the number of epochs (how many times each data point will be used), not the number of gradient steps. shufflebool, default=True Whether to shuffle samples in each iteration. Only used when solver=’sgd’ or ‘adam’. random_stateint, RandomState instance, default=None Determines random number generation for weights and bias initialization, train-test split if early stopping is used, and batch sampling when solver=’sgd’ or ‘adam’. Pass an int for reproducible results across multiple function calls. See Glossary. tolfloat, default=1e-4 Tolerance for the optimization. 
When the loss or score is not improving by at least tol for n_iter_no_change consecutive iterations, unless learning_rate is set to ‘adaptive’, convergence is considered to be reached and training stops. verbosebool, default=False Whether to print progress messages to stdout. warm_startbool, default=False When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See the Glossary. momentumfloat, default=0.9 Momentum for gradient descent update. Should be between 0 and 1. Only used when solver=’sgd’. nesterovs_momentumbool, default=True Whether to use Nesterov’s momentum. Only used when solver=’sgd’ and momentum > 0. early_stoppingbool, default=False Whether to use early stopping to terminate training when validation score is not improving. If set to true, it will automatically set aside 10% of training data as validation and terminate training when validation score is not improving by at least tol for n_iter_no_change consecutive epochs. Only effective when solver=’sgd’ or ‘adam’ validation_fractionfloat, default=0.1 The proportion of training data to set aside as validation set for early stopping. Must be between 0 and 1. Only used if early_stopping is True beta_1float, default=0.9 Exponential decay rate for estimates of first moment vector in adam, should be in [0, 1). Only used when solver=’adam’ beta_2float, default=0.999 Exponential decay rate for estimates of second moment vector in adam, should be in [0, 1). Only used when solver=’adam’ epsilonfloat, default=1e-8 Value for numerical stability in adam. Only used when solver=’adam’ n_iter_no_changeint, default=10 Maximum number of epochs to not meet tol improvement. Only effective when solver=’sgd’ or ‘adam’ New in version 0.20. max_funint, default=15000 Only used when solver=’lbfgs’. Maximum number of function calls. 
The solver iterates until convergence (determined by ‘tol’), number of iterations reaches max_iter, or this number of function calls. Note that number of function calls will be greater than or equal to the number of iterations for the MLPRegressor. New in version 0.22. Attributes loss_float The current loss computed with the loss function. best_loss_float The minimum loss reached by the solver throughout fitting. loss_curve_list of shape (n_iter_,) The ith element in the list represents the loss evaluated at the end of the ith iteration. t_int The number of training samples seen by the solver during fitting; mathematically equal to n_iter_ * X.shape[0], and used by the optimizer’s learning rate scheduler. coefs_list of shape (n_layers - 1,) The ith element in the list represents the weight matrix corresponding to layer i. intercepts_list of shape (n_layers - 1,) The ith element in the list represents the bias vector corresponding to layer i + 1. n_iter_int The number of iterations the solver has run. n_layers_int Number of layers. n_outputs_int Number of outputs. out_activation_str Name of the output activation function. Notes MLPRegressor trains iteratively since at each time step the partial derivatives of the loss function with respect to the model parameters are computed to update the parameters. It can also have a regularization term added to the loss function that shrinks model parameters to prevent overfitting. This implementation works with data represented as dense and sparse numpy arrays of floating point values. References Hinton, Geoffrey E. “Connectionist learning procedures.” Artificial intelligence 40.1 (1989): 185-234. Glorot, Xavier, and Yoshua Bengio. “Understanding the difficulty of training deep feedforward neural networks.” International Conference on Artificial Intelligence and Statistics. 2010. He, Kaiming, et al. 
“Delving deep into rectifiers: Surpassing human-level performance on imagenet classification.” arXiv preprint arXiv:1502.01852 (2015). Kingma, Diederik, and Jimmy Ba. “Adam: A method for stochastic optimization.” arXiv preprint arXiv:1412.6980 (2014). Examples >>> from sklearn.neural_network import MLPRegressor >>> from sklearn.datasets import make_regression >>> from sklearn.model_selection import train_test_split >>> X, y = make_regression(n_samples=200, random_state=1) >>> X_train, X_test, y_train, y_test = train_test_split(X, y, ... random_state=1) >>> regr = MLPRegressor(random_state=1, max_iter=500).fit(X_train, y_train) >>> regr.predict(X_test[:2]) array([-0.9..., -7.1...]) >>> regr.score(X_test, y_test) 0.4... Methods fit(X, y) Fit the model to data matrix X and target(s) y. get_params([deep]) Get parameters for this estimator. predict(X) Predict using the multi-layer perceptron model. score(X, y[, sample_weight]) Return the coefficient of determination \(R^2\) of the prediction. set_params(**params) Set the parameters of this estimator. fit(X, y) [source] Fit the model to data matrix X and target(s) y. Parameters Xndarray or sparse matrix of shape (n_samples, n_features) The input data. yndarray of shape (n_samples,) or (n_samples, n_outputs) The target values (class labels in classification, real numbers in regression). Returns selfreturns a trained MLP model. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. property partial_fit Update the model with a single iteration over the given data. Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) The input data. yndarray of shape (n_samples,) The target values. Returns selfreturns a trained MLP model. predict(X) [source] Predict using the multi-layer perceptron model. 
Parameters X{array-like, sparse matrix} of shape (n_samples, n_features) The input data. Returns yndarray of shape (n_samples, n_outputs) The predicted values. score(X, y, sample_weight=None) [source] Return the coefficient of determination \(R^2\) of the prediction. The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0. Parameters Xarray-like of shape (n_samples, n_features) Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator. yarray-like of shape (n_samples,) or (n_samples, n_outputs) True values for X. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns scorefloat \(R^2\) of self.predict(X) wrt. y. Notes The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor). set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
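To complement the brief doctest above, here is a small, self-contained sketch (synthetic data; layer sizes and other hyperparameters chosen only for illustration) showing how early_stopping interacts with the fitted attributes n_iter_, loss_curve_, and score described in this page:

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

regr = MLPRegressor(
    hidden_layer_sizes=(50, 25),   # two hidden layers
    solver="adam",
    early_stopping=True,   # holds out validation_fraction of X_train
    n_iter_no_change=10,   # stop after 10 epochs without tol improvement
    max_iter=1000,
    random_state=0,
).fit(X_train, y_train)

print(regr.n_iter_)            # epochs actually run (may be < max_iter)
print(len(regr.loss_curve_))   # one loss value per epoch
print(regr.score(X_test, y_test))  # R^2 on held-out data
```

Because early stopping monitors the validation score, training typically halts well before max_iter on easy data; loss_curve_ then records one entry per completed epoch.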
class sklearn.pipeline.FeatureUnion(transformer_list, *, n_jobs=None, transformer_weights=None, verbose=False) [source] Concatenates results of multiple transformer objects. This estimator applies a list of transformer objects in parallel to the input data, then concatenates the results. This is useful to combine several feature extraction mechanisms into a single transformer. Parameters of the transformers may be set using the transformer’s name and the parameter name separated by a ‘__’. A transformer may be replaced entirely by setting the parameter with its name to another transformer, or removed by setting to ‘drop’. Read more in the User Guide. New in version 0.13. Parameters transformer_listlist of (string, transformer) tuples List of transformer objects to be applied to the data. The first half of each tuple is the name of the transformer. The transformer can be ‘drop’ for it to be ignored. Changed in version 0.22: Deprecated None as a transformer in favor of ‘drop’. n_jobsint, default=None Number of jobs to run in parallel. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. Changed in version v0.20: n_jobs default changed from 1 to None transformer_weightsdict, default=None Multiplicative weights for features per transformer. Keys are transformer names, values the weights. Raises ValueError if key not present in transformer_list. verbosebool, default=False If True, the time elapsed while fitting each transformer will be printed as it is completed. Attributes n_features_in_ See also make_union Convenience function for simplified feature union construction. Examples >>> from sklearn.pipeline import FeatureUnion >>> from sklearn.decomposition import PCA, TruncatedSVD >>> union = FeatureUnion([("pca", PCA(n_components=1)), ... 
("svd", TruncatedSVD(n_components=2))]) >>> X = [[0., 1., 3], [2., 2., 5]] >>> union.fit_transform(X) array([[ 1.5 , 3.0..., 0.8...], [-1.5 , 5.7..., -0.4...]]) Methods fit(X[, y]) Fit all transformers using X. fit_transform(X[, y]) Fit all transformers, transform the data and concatenate results. get_feature_names() Get feature names from all transformers. get_params([deep]) Get parameters for this estimator. set_params(**kwargs) Set the parameters of this estimator. transform(X) Transform X separately by each transformer, concatenate results. fit(X, y=None, **fit_params) [source] Fit all transformers using X. Parameters Xiterable or array-like, depending on transformers Input data, used to fit transformers. yarray-like of shape (n_samples, n_outputs), default=None Targets for supervised learning. Returns selfFeatureUnion This estimator fit_transform(X, y=None, **fit_params) [source] Fit all transformers, transform the data and concatenate results. Parameters Xiterable or array-like, depending on transformers Input data to be transformed. yarray-like of shape (n_samples, n_outputs), default=None Targets for supervised learning. Returns X_tarray-like or sparse matrix of shape (n_samples, sum_n_components) hstack of results of transformers. sum_n_components is the sum of n_components (output dimension) over transformers. get_feature_names() [source] Get feature names from all transformers. Returns feature_nameslist of strings Names of the features produced by transform. get_params(deep=True) [source] Get parameters for this estimator. Returns the parameters given in the constructor as well as the estimators contained within the transformer_list of the FeatureUnion. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsmapping of string to any Parameter names mapped to their values. set_params(**kwargs) [source] Set the parameters of this estimator. 
Valid parameter keys can be listed with get_params(). Note that you can directly set the parameters of the estimators contained in transformer_list. Returns self transform(X) [source] Transform X separately by each transformer, concatenate results. Parameters Xiterable or array-like, depending on transformers Input data to be transformed. Returns X_tarray-like or sparse matrix of shape (n_samples, sum_n_components) hstack of results of transformers. sum_n_components is the sum of n_components (output dimension) over transformers.
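A brief sketch (toy data, transformer names chosen only for illustration) of the ‘__’ nested-parameter syntax and the ‘drop’ mechanism described above:

```python
import numpy as np
from sklearn.pipeline import FeatureUnion
from sklearn.decomposition import PCA, TruncatedSVD

X = np.array([[0., 1., 3.], [2., 2., 5.], [4., 0., 1.]])

# Two transformers applied in parallel; their outputs are hstacked.
union = FeatureUnion([("pca", PCA(n_components=1)),
                      ("svd", TruncatedSVD(n_components=2))])
print(union.fit_transform(X).shape)  # (3, 1 + 2) = (3, 3)

# Tune a nested parameter with the '<name>__<parameter>' syntax ...
union.set_params(pca__n_components=2)
print(union.fit_transform(X).shape)  # (3, 4)

# ... or disable one transformer entirely by replacing it with 'drop'.
union.set_params(svd="drop")
print(union.fit_transform(X).shape)  # (3, 2)
```

The output width is always the sum of each active transformer’s n_components, which is why dropping ‘svd’ shrinks the result to the two PCA components.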
sklearn.modules.generated.sklearn.pipeline.featureunion#sklearn.pipeline.FeatureUnion
sklearn.pipeline.FeatureUnion class sklearn.pipeline.FeatureUnion(transformer_list, *, n_jobs=None, transformer_weights=None, verbose=False) [source] Concatenates results of multiple transformer objects. This estimator applies a list of transformer objects in parallel to the input data, then concatenates the results. This is useful to combine several feature extraction mechanisms into a single transformer. Parameters of the transformers may be set using its name and the parameter name separated by a ‘__’. A transformer may be replaced entirely by setting the parameter with its name to another transformer, or removed by setting to ‘drop’. Read more in the User Guide. New in version 0.13. Parameters transformer_listlist of (string, transformer) tuples List of transformer objects to be applied to the data. The first half of each tuple is the name of the transformer. The tranformer can be ‘drop’ for it to be ignored. Changed in version 0.22: Deprecated None as a transformer in favor of ‘drop’. n_jobsint, default=None Number of jobs to run in parallel. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. Changed in version v0.20: n_jobs default changed from 1 to None transformer_weightsdict, default=None Multiplicative weights for features per transformer. Keys are transformer names, values the weights. Raises ValueError if key not present in transformer_list. verbosebool, default=False If True, the time elapsed while fitting each transformer will be printed as it is completed. Attributes n_features_in_ See also make_union Convenience function for simplified feature union construction. Examples >>> from sklearn.pipeline import FeatureUnion >>> from sklearn.decomposition import PCA, TruncatedSVD >>> union = FeatureUnion([("pca", PCA(n_components=1)), ... 
("svd", TruncatedSVD(n_components=2))]) >>> X = [[0., 1., 3], [2., 2., 5]] >>> union.fit_transform(X) array([[ 1.5 , 3.0..., 0.8...], [-1.5 , 5.7..., -0.4...]]) Methods fit(X[, y]) Fit all transformers using X. fit_transform(X[, y]) Fit all transformers, transform the data and concatenate results. get_feature_names() Get feature names from all transformers. get_params([deep]) Get parameters for this estimator. set_params(**kwargs) Set the parameters of this estimator. transform(X) Transform X separately by each transformer, concatenate results. fit(X, y=None, **fit_params) [source] Fit all transformers using X. Parameters Xiterable or array-like, depending on transformers Input data, used to fit transformers. yarray-like of shape (n_samples, n_outputs), default=None Targets for supervised learning. Returns selfFeatureUnion This estimator fit_transform(X, y=None, **fit_params) [source] Fit all transformers, transform the data and concatenate results. Parameters Xiterable or array-like, depending on transformers Input data to be transformed. yarray-like of shape (n_samples, n_outputs), default=None Targets for supervised learning. Returns X_tarray-like or sparse matrix of shape (n_samples, sum_n_components) hstack of results of transformers. sum_n_components is the sum of n_components (output dimension) over transformers. get_feature_names() [source] Get feature names from all transformers. Returns feature_nameslist of strings Names of the features produced by transform. get_params(deep=True) [source] Get parameters for this estimator. Returns the parameters given in the constructor as well as the estimators contained within the transformer_list of the FeatureUnion. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsmapping of string to any Parameter names mapped to their values. set_params(**kwargs) [source] Set the parameters of this estimator. 
Valid parameter keys can be listed with get_params(). Note that you can directly set the parameters of the estimators contained in tranformer_list. Returns self transform(X) [source] Transform X separately by each transformer, concatenate results. Parameters Xiterable or array-like, depending on transformers Input data to be transformed. Returns X_tarray-like or sparse matrix of shape (n_samples, sum_n_components) hstack of results of transformers. sum_n_components is the sum of n_components (output dimension) over transformers. Examples using sklearn.pipeline.FeatureUnion Concatenating multiple feature extraction methods
sklearn.modules.generated.sklearn.pipeline.featureunion
sklearn.pipeline.make_pipeline(*steps, memory=None, verbose=False) [source] Construct a Pipeline from the given estimators. This is a shorthand for the Pipeline constructor; it does not require, and does not permit, naming the estimators. Instead, their names will be set to the lowercase of their types automatically. Parameters *stepslist of estimators. memorystr or object with the joblib.Memory interface, default=None Used to cache the fitted transformers of the pipeline. By default, no caching is performed. If a string is given, it is the path to the caching directory. Enabling caching triggers a clone of the transformers before fitting. Therefore, the transformer instance given to the pipeline cannot be inspected directly. Use the attribute named_steps or steps to inspect estimators within the pipeline. Caching the transformers is advantageous when fitting is time consuming. verbosebool, default=False If True, the time elapsed while fitting each step will be printed as it is completed. Returns pPipeline See also Pipeline Class for creating a pipeline of transforms with a final estimator. Examples >>> from sklearn.naive_bayes import GaussianNB >>> from sklearn.preprocessing import StandardScaler >>> make_pipeline(StandardScaler(), GaussianNB(priors=None)) Pipeline(steps=[('standardscaler', StandardScaler()), ('gaussiannb', GaussianNB())])
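As a sketch of the memory caching behavior described above (the temporary cache directory is an illustrative choice, not from the original text): because enabling caching clones the transformers before fitting, the fitted instances must be inspected through named_steps rather than through the objects passed in.

```python
import tempfile
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(random_state=0)

# Step names are derived automatically from the lowercased class names.
cache_dir = tempfile.mkdtemp()
pipe = make_pipeline(StandardScaler(), SVC(), memory=cache_dir)
pipe.fit(X, y)

# Caching clones the transformers, so inspect the fitted scaler
# via named_steps rather than the original StandardScaler() object.
print(pipe.named_steps["standardscaler"].mean_.shape)  # (20,)
```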
sklearn.pipeline.make_union(*transformers, n_jobs=None, verbose=False) [source] Construct a FeatureUnion from the given transformers. This is a shorthand for the FeatureUnion constructor; it does not require, and does not permit, naming the transformers. Instead, they will be given names automatically based on their types. It also does not allow weighting. Parameters *transformerslist of estimators n_jobsint, default=None Number of jobs to run in parallel. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. Changed in version v0.20: n_jobs default changed from 1 to None verbosebool, default=False If True, the time elapsed while fitting each transformer will be printed as it is completed. Returns fFeatureUnion See also FeatureUnion Class for concatenating the results of multiple transformer objects. Examples >>> from sklearn.decomposition import PCA, TruncatedSVD >>> from sklearn.pipeline import make_union >>> make_union(PCA(), TruncatedSVD()) FeatureUnion(transformer_list=[('pca', PCA()), ('truncatedsvd', TruncatedSVD())])
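A small supplementary sketch (the data here is illustrative, not from the original text) showing the automatic naming and the hstacked output described above:

```python
import numpy as np
from sklearn.decomposition import PCA, TruncatedSVD
from sklearn.pipeline import make_union

X = np.array([[0., 1., 3.], [2., 2., 5.], [4., 3., 7.]])

# Names are generated automatically from the transformer class names.
union = make_union(PCA(n_components=2), TruncatedSVD(n_components=2))

# Outputs are hstacked: 2 + 2 = 4 columns.
Xt = union.fit_transform(X)
print(Xt.shape)  # (3, 4)
print([name for name, _ in union.transformer_list])  # ['pca', 'truncatedsvd']
```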
class sklearn.pipeline.Pipeline(steps, *, memory=None, verbose=False) [source] Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be ‘transforms’, that is, they must implement fit and transform methods. The final estimator only needs to implement fit. The transformers in the pipeline can be cached using memory argument. The purpose of the pipeline is to assemble several steps that can be cross-validated together while setting different parameters. For this, it enables setting parameters of the various steps using their names and the parameter name separated by a ‘__’, as in the example below. A step’s estimator may be replaced entirely by setting the parameter with its name to another estimator, or a transformer removed by setting it to ‘passthrough’ or None. Read more in the User Guide. New in version 0.5. Parameters stepslist List of (name, transform) tuples (implementing fit/transform) that are chained, in the order in which they are chained, with the last object an estimator. memorystr or object with the joblib.Memory interface, default=None Used to cache the fitted transformers of the pipeline. By default, no caching is performed. If a string is given, it is the path to the caching directory. Enabling caching triggers a clone of the transformers before fitting. Therefore, the transformer instance given to the pipeline cannot be inspected directly. Use the attribute named_steps or steps to inspect estimators within the pipeline. Caching the transformers is advantageous when fitting is time consuming. verbosebool, default=False If True, the time elapsed while fitting each step will be printed as it is completed. Attributes named_stepsBunch Dictionary-like object, with the following attributes. Read-only attribute to access any step parameter by user given name. Keys are step names and values are steps parameters. 
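The two mechanisms described above, setting step parameters via the '__' separator and disabling a step with 'passthrough', can be sketched as follows (a supplementary illustration, not from the original text):

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

pipe = Pipeline([("scaler", StandardScaler()), ("svc", SVC())])

# Set a parameter of a step with the <step name>__<parameter> syntax.
pipe.set_params(svc__C=10)
print(pipe.get_params()["svc__C"])  # 10

# Remove a step entirely by replacing it with 'passthrough'.
pipe.set_params(scaler="passthrough")
print(pipe.steps[0])  # ('scaler', 'passthrough')
```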
See also make_pipeline Convenience function for simplified pipeline construction. Examples >>> from sklearn.svm import SVC >>> from sklearn.preprocessing import StandardScaler >>> from sklearn.datasets import make_classification >>> from sklearn.model_selection import train_test_split >>> from sklearn.pipeline import Pipeline >>> X, y = make_classification(random_state=0) >>> X_train, X_test, y_train, y_test = train_test_split(X, y, ... random_state=0) >>> pipe = Pipeline([('scaler', StandardScaler()), ('svc', SVC())]) >>> # The pipeline can be used as any other estimator >>> # and avoids leaking the test set into the train set >>> pipe.fit(X_train, y_train) Pipeline(steps=[('scaler', StandardScaler()), ('svc', SVC())]) >>> pipe.score(X_test, y_test) 0.88 Methods decision_function(X) Apply transforms, and decision_function of the final estimator fit(X[, y]) Fit the model fit_predict(X[, y]) Applies fit_predict of last step in pipeline after transforms. fit_transform(X[, y]) Fit the model and transform with the final estimator get_params([deep]) Get parameters for this estimator. predict(X, **predict_params) Apply transforms to the data, and predict with the final estimator predict_log_proba(X) Apply transforms, and predict_log_proba of the final estimator predict_proba(X) Apply transforms, and predict_proba of the final estimator score(X[, y, sample_weight]) Apply transforms, and score with the final estimator score_samples(X) Apply transforms, and score_samples of the final estimator. set_params(**kwargs) Set the parameters of this estimator. decision_function(X) [source] Apply transforms, and decision_function of the final estimator Parameters Xiterable Data to predict on. Must fulfill input requirements of first step of the pipeline. 
Returns y_scorearray-like of shape (n_samples, n_classes) fit(X, y=None, **fit_params) [source] Fit the model Fit all the transforms one after the other and transform the data, then fit the transformed data using the final estimator. Parameters Xiterable Training data. Must fulfill input requirements of first step of the pipeline. yiterable, default=None Training targets. Must fulfill label requirements for all steps of the pipeline. **fit_paramsdict of string -> object Parameters passed to the fit method of each step, where each parameter name is prefixed such that parameter p for step s has key s__p. Returns selfPipeline This estimator fit_predict(X, y=None, **fit_params) [source] Applies fit_predict of last step in pipeline after transforms. Applies fit_transforms of a pipeline to the data, followed by the fit_predict method of the final estimator in the pipeline. Valid only if the final estimator implements fit_predict. Parameters Xiterable Training data. Must fulfill input requirements of first step of the pipeline. yiterable, default=None Training targets. Must fulfill label requirements for all steps of the pipeline. **fit_paramsdict of string -> object Parameters passed to the fit method of each step, where each parameter name is prefixed such that parameter p for step s has key s__p. Returns y_predarray-like fit_transform(X, y=None, **fit_params) [source] Fit the model and transform with the final estimator Fits all the transforms one after the other and transforms the data, then uses fit_transform on transformed data with the final estimator. Parameters Xiterable Training data. Must fulfill input requirements of first step of the pipeline. yiterable, default=None Training targets. Must fulfill label requirements for all steps of the pipeline. **fit_paramsdict of string -> object Parameters passed to the fit method of each step, where each parameter name is prefixed such that parameter p for step s has key s__p. 
Returns Xtarray-like of shape (n_samples, n_transformed_features) Transformed samples get_params(deep=True) [source] Get parameters for this estimator. Returns the parameters given in the constructor as well as the estimators contained within the steps of the Pipeline. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsmapping of string to any Parameter names mapped to their values. property inverse_transform Apply inverse transformations in reverse order All estimators in the pipeline must support inverse_transform. Parameters Xtarray-like of shape (n_samples, n_transformed_features) Data samples, where n_samples is the number of samples and n_features is the number of features. Must fulfill input requirements of last step of pipeline’s inverse_transform method. Returns Xtarray-like of shape (n_samples, n_features) predict(X, **predict_params) [source] Apply transforms to the data, and predict with the final estimator Parameters Xiterable Data to predict on. Must fulfill input requirements of first step of the pipeline. **predict_paramsdict of string -> object Parameters to the predict called at the end of all transformations in the pipeline. Note that while this may be used to return uncertainties from some models with return_std or return_cov, uncertainties that are generated by the transformations in the pipeline are not propagated to the final estimator. New in version 0.20. Returns y_predarray-like predict_log_proba(X) [source] Apply transforms, and predict_log_proba of the final estimator Parameters Xiterable Data to predict on. Must fulfill input requirements of first step of the pipeline. Returns y_scorearray-like of shape (n_samples, n_classes) predict_proba(X) [source] Apply transforms, and predict_proba of the final estimator Parameters Xiterable Data to predict on. Must fulfill input requirements of first step of the pipeline. 
Returns y_probaarray-like of shape (n_samples, n_classes) score(X, y=None, sample_weight=None) [source] Apply transforms, and score with the final estimator Parameters Xiterable Data to predict on. Must fulfill input requirements of first step of the pipeline. yiterable, default=None Targets used for scoring. Must fulfill label requirements for all steps of the pipeline. sample_weightarray-like, default=None If not None, this argument is passed as sample_weight keyword argument to the score method of the final estimator. Returns scorefloat score_samples(X) [source] Apply transforms, and score_samples of the final estimator. Parameters Xiterable Data to predict on. Must fulfill input requirements of first step of the pipeline. Returns y_scorendarray of shape (n_samples,) set_params(**kwargs) [source] Set the parameters of this estimator. Valid parameter keys can be listed with get_params(). Note that you can directly set the parameters of the estimators contained in steps. Returns self property transform Apply transforms, and transform with the final estimator This also works where final estimator is None: all prior transformations are applied. Parameters Xiterable Data to transform. Must fulfill input requirements of first step of the pipeline. Returns Xtarray-like of shape (n_samples, n_transformed_features)
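As a sketch of the inverse_transform property documented above (a supplementary example, assuming every step supports inverse_transform, as both scalers here do): the inverse transformations are applied in reverse order, so a transform/inverse_transform round trip recovers the original data.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.array([[0., 1.], [2., 3.], [4., 5.]])

# Both steps implement inverse_transform, so the pipeline exposes it too.
pipe = Pipeline([("std", StandardScaler()), ("minmax", MinMaxScaler())])
Xt = pipe.fit_transform(X)

# inverse_transform undoes the steps in reverse order: minmax, then std.
X_back = pipe.inverse_transform(Xt)
print(np.allclose(X_back, X))  # True
```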
Examples using sklearn.pipeline.Pipeline Feature agglomeration vs. univariate selection Poisson regression and non-normal loss Permutation Importance vs Random Forest Feature Importance (MDI) Scalable learning with polynomial kernel approximation Explicit feature map approximation for RBF kernels Underfitting vs. Overfitting Sample pipeline for text feature extraction and evaluation Balance model complexity and cross-validated score Caching nearest neighbors Comparing Nearest Neighbors with and without Neighborhood Components Analysis Restricted Boltzmann Machine features for digit classification Concatenating multiple feature extraction methods Pipelining: chaining a PCA and a logistic regression Selecting dimensionality reduction with Pipeline and GridSearchCV Column Transformer with Mixed Types Column Transformer with Heterogeneous Data Sources Semi-supervised Classification on a Text Dataset SVM-Anova: SVM with univariate feature selection Classification of text documents using sparse features