sklearn.metrics.fowlkes_mallows_score(labels_true, labels_pred, *, sparse=False) [source] Measure the similarity of two clusterings of a set of points. New in version 0.18. The Fowlkes-Mallows index (FMI) is defined as the geometric mean of the precision and recall: FMI = TP / sqrt((TP + FP) * (TP + FN)) where TP is the number of True Positives (i.e. the number of pairs of points that belong to the same clusters in both labels_true and labels_pred), FP is the number of False Positives (i.e. the number of pairs of points that belong to the same clusters in labels_true and not in labels_pred) and FN is the number of False Negatives (i.e. the number of pairs of points that belong to the same clusters in labels_pred and not in labels_true). The score ranges from 0 to 1. A high value indicates a good similarity between the two clusterings. Read more in the User Guide. Parameters labels_trueint array, shape = (n_samples,) A clustering of the data into disjoint subsets. labels_predarray, shape = (n_samples,) A clustering of the data into disjoint subsets. sparsebool, default=False Compute the contingency matrix internally with a sparse matrix. Returns scorefloat The resulting Fowlkes-Mallows score. References 1 E. B. Fowlkes and C. L. Mallows, 1983. “A method for comparing two hierarchical clusterings”. Journal of the American Statistical Association. 2 Wikipedia entry for the Fowlkes-Mallows Index. Examples Perfect labelings are both homogeneous and complete, hence have score 1.0: >>> from sklearn.metrics.cluster import fowlkes_mallows_score >>> fowlkes_mallows_score([0, 0, 1, 1], [0, 0, 1, 1]) 1.0 >>> fowlkes_mallows_score([0, 0, 1, 1], [1, 1, 0, 0]) 1.0 If class members are completely split across different clusters, the assignment is totally random, hence the FMI is null: >>> fowlkes_mallows_score([0, 0, 0, 0], [0, 1, 2, 3]) 0.0
sklearn.metrics.get_scorer(scoring) [source] Get a scorer from a string. Read more in the User Guide. Parameters scoringstr or callable Scoring method as a string. If callable, it is returned as is. Returns scorercallable The scorer.
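A minimal usage sketch (illustrative; clf, X_test and y_test are assumed to be a fitted classifier and a held-out test split, not names from the docstring):
>>> from sklearn.metrics import get_scorer
>>> scorer = get_scorer('accuracy')  # look up the built-in accuracy scorer by name
>>> # scorer(clf, X_test, y_test) would then return clf's accuracy on the test split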
sklearn.metrics.hamming_loss(y_true, y_pred, *, sample_weight=None) [source] Compute the average Hamming loss. The Hamming loss is the fraction of labels that are incorrectly predicted. Read more in the User Guide. Parameters y_true1d array-like, or label indicator array / sparse matrix Ground truth (correct) labels. y_pred1d array-like, or label indicator array / sparse matrix Predicted labels, as returned by a classifier. sample_weightarray-like of shape (n_samples,), default=None Sample weights. New in version 0.18. Returns lossfloat or int Return the average Hamming loss between the elements of y_true and y_pred. See also accuracy_score, jaccard_score, zero_one_loss Notes In multiclass classification, the Hamming loss corresponds to the Hamming distance between y_true and y_pred, which is equivalent to the subset zero_one_loss function when the normalize parameter is set to True. In multilabel classification, the Hamming loss is different from the subset zero-one loss. The zero-one loss considers the entire set of labels for a given sample incorrect if it does not entirely match the true set of labels. Hamming loss is more forgiving in that it penalizes only the individual labels. The Hamming loss is upper-bounded by the subset zero-one loss when the normalize parameter is set to True. It is always between 0 and 1, lower being better. References 1 Grigorios Tsoumakas, Ioannis Katakis. Multi-Label Classification: An Overview. International Journal of Data Warehousing & Mining, 3(3), 1-13, July-September 2007. 2 Wikipedia entry on the Hamming distance. Examples >>> from sklearn.metrics import hamming_loss >>> y_pred = [1, 2, 3, 4] >>> y_true = [2, 2, 3, 4] >>> hamming_loss(y_true, y_pred) 0.25 In the multilabel case with binary label indicators: >>> import numpy as np >>> hamming_loss(np.array([[0, 1], [1, 1]]), np.zeros((2, 2))) 0.75
sklearn.metrics.hinge_loss(y_true, pred_decision, *, labels=None, sample_weight=None) [source] Average hinge loss (non-regularized). In the binary case, assuming labels in y_true are encoded with +1 and -1, when a prediction mistake is made, margin = y_true * pred_decision is always negative (since the signs disagree), implying 1 - margin is always greater than 1. The cumulative hinge loss is therefore an upper bound on the number of mistakes made by the classifier. In the multiclass case, the function expects that either all the labels are included in y_true or an optional labels argument is provided which contains all the labels. The multiclass margin is calculated according to Crammer-Singer’s method. As in the binary case, the cumulative hinge loss is an upper bound on the number of mistakes made by the classifier. Read more in the User Guide. Parameters y_truearray of shape (n_samples,) True target, consisting of integers of two values. The positive label must be greater than the negative label. pred_decisionarray of shape (n_samples,) or (n_samples, n_classes) Predicted decisions, as output by decision_function (floats). labelsarray-like, default=None Contains all the labels for the problem. Used in multiclass hinge loss. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns lossfloat References 1 Wikipedia entry on the Hinge loss. 2 Koby Crammer, Yoram Singer. On the Algorithmic Implementation of Multiclass Kernel-based Vector Machines. Journal of Machine Learning Research 2, (2001), 265-292. 3 L1 AND L2 Regularization for Multiclass Hinge Loss Models by Robert C. Moore, John DeNero. Examples >>> from sklearn import svm >>> from sklearn.metrics import hinge_loss >>> X = [[0], [1]] >>> y = [-1, 1] >>> est = svm.LinearSVC(random_state=0) >>> est.fit(X, y) LinearSVC(random_state=0) >>> pred_decision = est.decision_function([[-2], [3], [0.5]]) >>> pred_decision array([-2.18..., 2.36..., 0.09...]) >>> hinge_loss([-1, 1, 1], pred_decision) 0.30... In the multiclass case: >>> import numpy as np >>> X = np.array([[0], [1], [2], [3]]) >>> Y = np.array([0, 1, 2, 3]) >>> labels = np.array([0, 1, 2, 3]) >>> est = svm.LinearSVC() >>> est.fit(X, Y) LinearSVC() >>> pred_decision = est.decision_function([[-1], [2], [3]]) >>> y_true = [0, 2, 3] >>> hinge_loss(y_true, pred_decision, labels=labels) 0.56...
sklearn.metrics.homogeneity_completeness_v_measure(labels_true, labels_pred, *, beta=1.0) [source] Compute the homogeneity, completeness and V-Measure scores at once. Those metrics are based on normalized conditional entropy measures of the clustering labeling to evaluate given the knowledge of the ground truth class labels of the same samples. A clustering result satisfies homogeneity if all of its clusters contain only data points which are members of a single class. A clustering result satisfies completeness if all the data points that are members of a given class are elements of the same cluster. Both scores have positive values between 0.0 and 1.0, larger values being desirable. Those 3 metrics are independent of the absolute values of the labels: a permutation of the class or cluster label values won’t change the score values in any way. V-Measure is furthermore symmetric: swapping labels_true and labels_pred will give the same score. This does not hold for homogeneity and completeness. V-Measure is identical to normalized_mutual_info_score with the arithmetic averaging method. Read more in the User Guide. Parameters labels_trueint array, shape = [n_samples] ground truth class labels to be used as a reference labels_predarray-like of shape (n_samples,) cluster labels to evaluate betafloat, default=1.0 Ratio of weight attributed to homogeneity vs completeness. If beta is greater than 1, completeness is weighted more strongly in the calculation. If beta is less than 1, homogeneity is weighted more strongly. Returns homogeneityfloat score between 0.0 and 1.0. 1.0 stands for perfectly homogeneous labeling completenessfloat score between 0.0 and 1.0. 1.0 stands for perfectly complete labeling v_measurefloat harmonic mean of the first two See also homogeneity_score completeness_score v_measure_score
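An illustrative call (outputs truncated): splitting one true class into two clusters keeps homogeneity at 1.0 but lowers completeness, and the V-measure is the harmonic mean of the two (with beta=1):
>>> from sklearn.metrics import homogeneity_completeness_v_measure
>>> homogeneity_completeness_v_measure([0, 0, 1, 1], [0, 0, 1, 2])
(1.0, 0.66..., 0.8...)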
sklearn.metrics.homogeneity_score(labels_true, labels_pred) [source] Homogeneity metric of a cluster labeling given a ground truth. A clustering result satisfies homogeneity if all of its clusters contain only data points which are members of a single class. This metric is independent of the absolute values of the labels: a permutation of the class or cluster label values won’t change the score value in any way. This metric is not symmetric: switching labels_true with labels_pred will return the completeness_score, which will be different in general. Read more in the User Guide. Parameters labels_trueint array, shape = [n_samples] ground truth class labels to be used as a reference labels_predarray-like of shape (n_samples,) cluster labels to evaluate Returns homogeneityfloat score between 0.0 and 1.0. 1.0 stands for perfectly homogeneous labeling See also completeness_score v_measure_score References 1 Andrew Rosenberg and Julia Hirschberg, 2007. V-Measure: A conditional entropy-based external cluster evaluation measure Examples Perfect labelings are homogeneous: >>> from sklearn.metrics.cluster import homogeneity_score >>> homogeneity_score([0, 0, 1, 1], [1, 1, 0, 0]) 1.0 Non-perfect labelings that further split classes into more clusters can be perfectly homogeneous: >>> print("%.6f" % homogeneity_score([0, 0, 1, 1], [0, 0, 1, 2])) 1.000000 >>> print("%.6f" % homogeneity_score([0, 0, 1, 1], [0, 1, 2, 3])) 1.000000 Clusters that include samples from different classes do not make for a homogeneous labeling: >>> print("%.6f" % homogeneity_score([0, 0, 1, 1], [0, 1, 0, 1])) 0.0... >>> print("%.6f" % homogeneity_score([0, 0, 1, 1], [0, 0, 0, 0])) 0.0...
sklearn.metrics.jaccard_score(y_true, y_pred, *, labels=None, pos_label=1, average='binary', sample_weight=None, zero_division='warn') [source] Jaccard similarity coefficient score. The Jaccard index [1], or Jaccard similarity coefficient, defined as the size of the intersection divided by the size of the union of two label sets, is used to compare the set of predicted labels for a sample to the corresponding set of labels in y_true. Read more in the User Guide. Parameters y_true1d array-like, or label indicator array / sparse matrix Ground truth (correct) labels. y_pred1d array-like, or label indicator array / sparse matrix Predicted labels, as returned by a classifier. labelsarray-like of shape (n_classes,), default=None The set of labels to include when average != 'binary', and their order if average is None. Labels present in the data can be excluded, for example to calculate a multiclass average ignoring a majority negative class, while labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in y_true and y_pred are used in sorted order. pos_labelstr or int, default=1 The class to report if average='binary' and the data is binary. If the data are multiclass or multilabel, this will be ignored; setting labels=[pos_label] and average != 'binary' will report scores for that label only. average{None, ‘micro’, ‘macro’, ‘samples’, ‘weighted’, ‘binary’}, default=’binary’ If None, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data: 'binary': Only report results for the class specified by pos_label. This is applicable only if targets (y_{true,pred}) are binary. 'micro': Calculate metrics globally by counting the total true positives, false negatives and false positives. 'macro': Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account. 'weighted': Calculate metrics for each label, and find their average, weighted by support (the number of true instances for each label). This alters ‘macro’ to account for label imbalance. 'samples': Calculate metrics for each instance, and find their average (only meaningful for multilabel classification). sample_weightarray-like of shape (n_samples,), default=None Sample weights. zero_division“warn”, {0.0, 1.0}, default=”warn” Sets the value to return when there is a zero division, i.e. when there are no negative values in predictions and labels. If set to “warn”, this acts like 0, but a warning is also raised. Returns scorefloat (if average is not None) or array of floats, shape = [n_unique_labels] See also accuracy_score, f1_score, multilabel_confusion_matrix Notes jaccard_score may be a poor metric if there are no positives for some samples or classes. Jaccard is undefined if there are no true or predicted labels, and our implementation will return a score of 0 with a warning. References 1 Wikipedia entry for the Jaccard index. Examples >>> import numpy as np >>> from sklearn.metrics import jaccard_score >>> y_true = np.array([[0, 1, 1], ... [1, 1, 0]]) >>> y_pred = np.array([[1, 1, 1], ... [1, 0, 0]]) In the binary case: >>> jaccard_score(y_true[0], y_pred[0]) 0.6666... In the multilabel case: >>> jaccard_score(y_true, y_pred, average='samples') 0.5833... >>> jaccard_score(y_true, y_pred, average='macro') 0.6666... >>> jaccard_score(y_true, y_pred, average=None) array([0.5, 0.5, 1. ]) In the multiclass case: >>> y_pred = [0, 2, 1, 2] >>> y_true = [0, 1, 2, 2] >>> jaccard_score(y_true, y_pred, average=None) array([1. , 0. , 0.33...])
sklearn.metrics.label_ranking_average_precision_score(y_true, y_score, *, sample_weight=None) [source] Compute ranking-based average precision. Label ranking average precision (LRAP) is the average over each ground truth label assigned to each sample, of the ratio of true vs. total labels with lower score. This metric is used in multilabel ranking problems, where the goal is to give better rank to the labels associated with each sample. The obtained score is always strictly greater than 0 and the best value is 1. Read more in the User Guide. Parameters y_true{ndarray, sparse matrix} of shape (n_samples, n_labels) True binary labels in binary indicator format. y_scorendarray of shape (n_samples, n_labels) Target scores, can either be probability estimates of the positive class, confidence values, or non-thresholded measure of decisions (as returned by “decision_function” on some classifiers). sample_weightarray-like of shape (n_samples,), default=None Sample weights. New in version 0.20. Returns scorefloat Examples >>> import numpy as np >>> from sklearn.metrics import label_ranking_average_precision_score >>> y_true = np.array([[1, 0, 0], [0, 0, 1]]) >>> y_score = np.array([[0.75, 0.5, 1], [1, 0.2, 0.1]]) >>> label_ranking_average_precision_score(y_true, y_score) 0.416...
sklearn.metrics.label_ranking_loss(y_true, y_score, *, sample_weight=None) [source] Compute the ranking loss measure. Compute the average number of label pairs that are incorrectly ordered given y_score, weighted by the size of the label set and the number of labels not in the label set. This is similar to the error set size, but weighted by the number of relevant and irrelevant labels. The best performance is achieved with a ranking loss of zero. Read more in the User Guide. New in version 0.17: A function label_ranking_loss. Parameters y_true{ndarray, sparse matrix} of shape (n_samples, n_labels) True binary labels in binary indicator format. y_scorendarray of shape (n_samples, n_labels) Target scores, can either be probability estimates of the positive class, confidence values, or non-thresholded measure of decisions (as returned by “decision_function” on some classifiers). sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns lossfloat References 1 Tsoumakas, G., Katakis, I., & Vlahavas, I. (2010). Mining multi-label data. In Data mining and knowledge discovery handbook (pp. 667-685). Springer US.
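A short illustrative example: in the first sample the single relevant label (score 0.75) is out-ranked by one of the two irrelevant labels, and in the second sample the relevant label is out-ranked by both, so the loss is (1/2 + 2/2) / 2 = 0.75:
>>> import numpy as np
>>> from sklearn.metrics import label_ranking_loss
>>> y_true = np.array([[1, 0, 0], [0, 0, 1]])
>>> y_score = np.array([[0.75, 0.5, 1], [1, 0.2, 0.1]])
>>> label_ranking_loss(y_true, y_score)
0.75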
sklearn.metrics.log_loss(y_true, y_pred, *, eps=1e-15, normalize=True, sample_weight=None, labels=None) [source] Log loss, aka logistic loss or cross-entropy loss. This is the loss function used in (multinomial) logistic regression and extensions of it such as neural networks, defined as the negative log-likelihood of a logistic model that returns y_pred probabilities for its training data y_true. The log loss is only defined for two or more labels. For a single sample with true label \(y \in \{0,1\}\) and a probability estimate \(p = \operatorname{Pr}(y = 1)\), the log loss is: \[L_{\log}(y, p) = -(y \log (p) + (1 - y) \log (1 - p))\] Read more in the User Guide. Parameters y_truearray-like or label indicator matrix Ground truth (correct) labels for n_samples samples. y_predarray-like of float, shape = (n_samples, n_classes) or (n_samples,) Predicted probabilities, as returned by a classifier’s predict_proba method. If y_pred.shape = (n_samples,) the probabilities provided are assumed to be that of the positive class. The labels in y_pred are assumed to be ordered alphabetically, as done by preprocessing.LabelBinarizer. epsfloat, default=1e-15 Log loss is undefined for p=0 or p=1, so probabilities are clipped to max(eps, min(1 - eps, p)). normalizebool, default=True If True, return the mean loss per sample. Otherwise, return the sum of the per-sample losses. sample_weightarray-like of shape (n_samples,), default=None Sample weights. labelsarray-like, default=None If not provided, labels will be inferred from y_true. If labels is None and y_pred has shape (n_samples,) the labels are assumed to be binary and are inferred from y_true. New in version 0.18. Returns lossfloat Notes The logarithm used is the natural logarithm (base-e). References C.M. Bishop (2006). Pattern Recognition and Machine Learning. Springer, p. 209. Examples >>> from sklearn.metrics import log_loss >>> log_loss(["spam", "ham", "ham", "spam"], ... [[.1, .9], [.9, .1], [.8, .2], [.35, .65]]) 0.21616...
sklearn.metrics.make_scorer(score_func, *, greater_is_better=True, needs_proba=False, needs_threshold=False, **kwargs) [source] Make a scorer from a performance metric or loss function. This factory function wraps scoring functions for use in GridSearchCV and cross_val_score. It takes a score function, such as accuracy_score, mean_squared_error, adjusted_rand_score or average_precision_score, and returns a callable that scores an estimator’s output. The signature of the call is (estimator, X, y) where estimator is the model to be evaluated, X is the data and y is the ground truth labeling (or None in the case of unsupervised models). Read more in the User Guide. Parameters score_funccallable Score function (or loss function) with signature score_func(y, y_pred, **kwargs). greater_is_betterbool, default=True Whether score_func is a score function (default), meaning high is good, or a loss function, meaning low is good. In the latter case, the scorer object will sign-flip the outcome of the score_func. needs_probabool, default=False Whether score_func requires predict_proba to get probability estimates out of a classifier. If True, for binary y_true, the score function is supposed to accept a 1D y_pred (i.e., probability of the positive class, shape (n_samples,)). needs_thresholdbool, default=False Whether score_func takes a continuous decision certainty. This only works for binary classification using estimators that have either a decision_function or predict_proba method. If True, for binary y_true, the score function is supposed to accept a 1D y_pred (i.e., probability of the positive class or the decision function, shape (n_samples,)). For example, average precision or the area under the ROC curve cannot be computed using discrete predictions alone. **kwargsadditional arguments Additional parameters to be passed to score_func. Returns scorercallable Callable object that returns a scalar score; greater is better. Notes If needs_proba=False and needs_threshold=False, the score function is supposed to accept the output of predict. If needs_proba=True, the score function is supposed to accept the output of predict_proba (for binary y_true, the score function is supposed to accept probability of the positive class). If needs_threshold=True, the score function is supposed to accept the output of decision_function. Examples >>> from sklearn.metrics import fbeta_score, make_scorer >>> ftwo_scorer = make_scorer(fbeta_score, beta=2) >>> ftwo_scorer make_scorer(fbeta_score, beta=2) >>> from sklearn.model_selection import GridSearchCV >>> from sklearn.svm import LinearSVC >>> grid = GridSearchCV(LinearSVC(), param_grid={'C': [1, 10]}, ... scoring=ftwo_scorer)
sklearn.metrics.matthews_corrcoef(y_true, y_pred, *, sample_weight=None) [source] Compute the Matthews correlation coefficient (MCC). The Matthews correlation coefficient is used in machine learning as a measure of the quality of binary and multiclass classifications. It takes into account true and false positives and negatives and is generally regarded as a balanced measure which can be used even if the classes are of very different sizes. The MCC is in essence a correlation coefficient value between -1 and +1. A coefficient of +1 represents a perfect prediction, 0 an average random prediction and -1 an inverse prediction. The statistic is also known as the phi coefficient. [source: Wikipedia] Binary and multiclass labels are supported. Only in the binary case does this relate to information about true and false positives and negatives. See references below. Read more in the User Guide. Parameters y_truearray, shape = [n_samples] Ground truth (correct) target values. y_predarray, shape = [n_samples] Estimated targets as returned by a classifier. sample_weightarray-like of shape (n_samples,), default=None Sample weights. New in version 0.18. Returns mccfloat The Matthews correlation coefficient (+1 represents a perfect prediction, 0 an average random prediction and -1 an inverse prediction). References 1 Baldi, Brunak, Chauvin, Andersen and Nielsen, (2000). Assessing the accuracy of prediction algorithms for classification: an overview. 2 Wikipedia entry for the Matthews Correlation Coefficient. 3 Gorodkin, (2004). Comparing two K-category assignments by a K-category correlation coefficient. 4 Jurman, Riccadonna, Furlanello, (2012). A Comparison of MCC and CEN Error Measures in MultiClass Prediction. Examples >>> from sklearn.metrics import matthews_corrcoef >>> y_true = [+1, +1, +1, -1] >>> y_pred = [+1, -1, +1, +1] >>> matthews_corrcoef(y_true, y_pred) -0.33...
sklearn.metrics.max_error(y_true, y_pred) [source] The max_error metric calculates the maximum residual error. Read more in the User Guide. Parameters y_truearray-like of shape (n_samples,) Ground truth (correct) target values. y_predarray-like of shape (n_samples,) Estimated target values. Returns max_errorfloat A positive floating point value (the best value is 0.0). Examples >>> from sklearn.metrics import max_error >>> y_true = [3, 2, 7, 1] >>> y_pred = [4, 2, 7, 1] >>> max_error(y_true, y_pred) 1
sklearn.metrics.mean_absolute_error(y_true, y_pred, *, sample_weight=None, multioutput='uniform_average') [source] Mean absolute error regression loss. Read more in the User Guide. Parameters y_truearray-like of shape (n_samples,) or (n_samples, n_outputs) Ground truth (correct) target values. y_predarray-like of shape (n_samples,) or (n_samples, n_outputs) Estimated target values. sample_weightarray-like of shape (n_samples,), default=None Sample weights. multioutput{‘raw_values’, ‘uniform_average’} or array-like of shape (n_outputs,), default=’uniform_average’ Defines aggregating of multiple output values. Array-like value defines weights used to average errors. ‘raw_values’ : Returns a full set of errors in case of multioutput input. ‘uniform_average’ : Errors of all outputs are averaged with uniform weight. Returns lossfloat or ndarray of floats If multioutput is ‘raw_values’, then mean absolute error is returned for each output separately. If multioutput is ‘uniform_average’ or an ndarray of weights, then the weighted average of all output errors is returned. MAE output is non-negative floating point. The best value is 0.0. Examples >>> from sklearn.metrics import mean_absolute_error >>> y_true = [3, -0.5, 2, 7] >>> y_pred = [2.5, 0.0, 2, 8] >>> mean_absolute_error(y_true, y_pred) 0.5 >>> y_true = [[0.5, 1], [-1, 1], [7, -6]] >>> y_pred = [[0, 2], [-1, 2], [8, -5]] >>> mean_absolute_error(y_true, y_pred) 0.75 >>> mean_absolute_error(y_true, y_pred, multioutput='raw_values') array([0.5, 1. ]) >>> mean_absolute_error(y_true, y_pred, multioutput=[0.3, 0.7]) 0.85...
sklearn.metrics.mean_absolute_percentage_error(y_true, y_pred, sample_weight=None, multioutput='uniform_average') [source] Mean absolute percentage error regression loss. Note here that we do not represent the output as a percentage in range [0, 100]. Instead, we represent it in range [0, 1/eps]. Read more in the User Guide. New in version 0.24. Parameters y_truearray-like of shape (n_samples,) or (n_samples, n_outputs) Ground truth (correct) target values. y_predarray-like of shape (n_samples,) or (n_samples, n_outputs) Estimated target values. sample_weightarray-like of shape (n_samples,), default=None Sample weights. multioutput{‘raw_values’, ‘uniform_average’} or array-like Defines aggregating of multiple output values. Array-like value defines weights used to average errors. If input is list then the shape must be (n_outputs,). ‘raw_values’ : Returns a full set of errors in case of multioutput input. ‘uniform_average’ : Errors of all outputs are averaged with uniform weight. Returns lossfloat or ndarray of floats in the range [0, 1/eps] If multioutput is ‘raw_values’, then mean absolute percentage error is returned for each output separately. If multioutput is ‘uniform_average’ or an ndarray of weights, then the weighted average of all output errors is returned. MAPE output is non-negative floating point. The best value is 0.0. Note, however, that bad predictions can lead to arbitrarily large MAPE values, especially if some y_true values are very close to zero. Note that we return a large value instead of inf when y_true is zero. Examples >>> from sklearn.metrics import mean_absolute_percentage_error >>> y_true = [3, -0.5, 2, 7] >>> y_pred = [2.5, 0.0, 2, 8] >>> mean_absolute_percentage_error(y_true, y_pred) 0.3273... >>> y_true = [[0.5, 1], [-1, 1], [7, -6]] >>> y_pred = [[0, 2], [-1, 2], [8, -5]] >>> mean_absolute_percentage_error(y_true, y_pred) 0.5515... >>> mean_absolute_percentage_error(y_true, y_pred, multioutput=[0.3, 0.7]) 0.6198...
sklearn.metrics.mean_gamma_deviance(y_true, y_pred, *, sample_weight=None) [source] Mean Gamma deviance regression loss. Gamma deviance is equivalent to the Tweedie deviance with the power parameter power=2. It is invariant to scaling of the target variable, and measures relative errors. Read more in the User Guide. Parameters y_truearray-like of shape (n_samples,) Ground truth (correct) target values. Requires y_true > 0. y_predarray-like of shape (n_samples,) Estimated target values. Requires y_pred > 0. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns lossfloat A non-negative floating point value (the best value is 0.0). Examples >>> from sklearn.metrics import mean_gamma_deviance >>> y_true = [2, 0.5, 1, 4] >>> y_pred = [0.5, 0.5, 2., 2.] >>> mean_gamma_deviance(y_true, y_pred) 1.0568...
sklearn.metrics.mean_poisson_deviance(y_true, y_pred, *, sample_weight=None) [source] Mean Poisson deviance regression loss. Poisson deviance is equivalent to the Tweedie deviance with the power parameter power=1. Read more in the User Guide. Parameters y_truearray-like of shape (n_samples,) Ground truth (correct) target values. Requires y_true >= 0. y_predarray-like of shape (n_samples,) Estimated target values. Requires y_pred > 0. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns lossfloat A non-negative floating point value (the best value is 0.0). Examples >>> from sklearn.metrics import mean_poisson_deviance >>> y_true = [2, 0, 1, 4] >>> y_pred = [0.5, 0.5, 2., 2.] >>> mean_poisson_deviance(y_true, y_pred) 1.4260...
sklearn.metrics.mean_squared_error(y_true, y_pred, *, sample_weight=None, multioutput='uniform_average', squared=True) [source] Mean squared error regression loss. Read more in the User Guide. Parameters y_truearray-like of shape (n_samples,) or (n_samples, n_outputs) Ground truth (correct) target values. y_predarray-like of shape (n_samples,) or (n_samples, n_outputs) Estimated target values. sample_weightarray-like of shape (n_samples,), default=None Sample weights. multioutput{‘raw_values’, ‘uniform_average’} or array-like of shape (n_outputs,), default=’uniform_average’ Defines aggregating of multiple output values. Array-like value defines weights used to average errors. ‘raw_values’ : Returns a full set of errors in case of multioutput input. ‘uniform_average’ : Errors of all outputs are averaged with uniform weight. squaredbool, default=True If True returns MSE value, if False returns RMSE value. Returns lossfloat or ndarray of floats A non-negative floating point value (the best value is 0.0), or an array of floating point values, one for each individual target. Examples >>> from sklearn.metrics import mean_squared_error >>> y_true = [3, -0.5, 2, 7] >>> y_pred = [2.5, 0.0, 2, 8] >>> mean_squared_error(y_true, y_pred) 0.375 >>> y_true = [3, -0.5, 2, 7] >>> y_pred = [2.5, 0.0, 2, 8] >>> mean_squared_error(y_true, y_pred, squared=False) 0.612... >>> y_true = [[0.5, 1],[-1, 1],[7, -6]] >>> y_pred = [[0, 2],[-1, 2],[8, -5]] >>> mean_squared_error(y_true, y_pred) 0.708... >>> mean_squared_error(y_true, y_pred, squared=False) 0.822... >>> mean_squared_error(y_true, y_pred, multioutput='raw_values') array([0.41666667, 1. ]) >>> mean_squared_error(y_true, y_pred, multioutput=[0.3, 0.7]) 0.825...
sklearn.metrics.mean_squared_log_error(y_true, y_pred, *, sample_weight=None, multioutput='uniform_average') [source] Mean squared logarithmic error regression loss. Read more in the User Guide. Parameters y_truearray-like of shape (n_samples,) or (n_samples, n_outputs) Ground truth (correct) target values. y_predarray-like of shape (n_samples,) or (n_samples, n_outputs) Estimated target values. sample_weightarray-like of shape (n_samples,), default=None Sample weights. multioutput{‘raw_values’, ‘uniform_average’} or array-like of shape (n_outputs,), default=’uniform_average’ Defines aggregating of multiple output values. Array-like value defines weights used to average errors. ‘raw_values’ : Returns a full set of errors when the input is of multioutput format. ‘uniform_average’ : Errors of all outputs are averaged with uniform weight. Returns lossfloat or ndarray of floats A non-negative floating point value (the best value is 0.0), or an array of floating point values, one for each individual target. Examples >>> from sklearn.metrics import mean_squared_log_error >>> y_true = [3, 5, 2.5, 7] >>> y_pred = [2.5, 5, 4, 8] >>> mean_squared_log_error(y_true, y_pred) 0.039... >>> y_true = [[0.5, 1], [1, 2], [7, 6]] >>> y_pred = [[0.5, 2], [1, 2.5], [8, 8]] >>> mean_squared_log_error(y_true, y_pred) 0.044... >>> mean_squared_log_error(y_true, y_pred, multioutput='raw_values') array([0.00462428, 0.08377444]) >>> mean_squared_log_error(y_true, y_pred, multioutput=[0.3, 0.7]) 0.060...
sklearn.metrics.mean_tweedie_deviance(y_true, y_pred, *, sample_weight=None, power=0) [source] Mean Tweedie deviance regression loss. Read more in the User Guide. Parameters y_truearray-like of shape (n_samples,) Ground truth (correct) target values. y_predarray-like of shape (n_samples,) Estimated target values. sample_weightarray-like of shape (n_samples,), default=None Sample weights. powerfloat, default=0 Tweedie power parameter. Either power <= 0 or power >= 1. The higher power, the less weight is given to extreme deviations between true and predicted targets. power < 0: Extreme stable distribution. Requires: y_pred > 0. power = 0 : Normal distribution, output corresponds to mean_squared_error. y_true and y_pred can be any real numbers. power = 1 : Poisson distribution. Requires: y_true >= 0 and y_pred > 0. 1 < power < 2 : Compound Poisson distribution. Requires: y_true >= 0 and y_pred > 0. power = 2 : Gamma distribution. Requires: y_true > 0 and y_pred > 0. power = 3 : Inverse Gaussian distribution. Requires: y_true > 0 and y_pred > 0. otherwise : Positive stable distribution. Requires: y_true > 0 and y_pred > 0. Returns lossfloat A non-negative floating point value (the best value is 0.0). Examples >>> from sklearn.metrics import mean_tweedie_deviance >>> y_true = [2, 0, 1, 4] >>> y_pred = [0.5, 0.5, 2., 2.] >>> mean_tweedie_deviance(y_true, y_pred, power=1) 1.4260...
sklearn.metrics.median_absolute_error(y_true, y_pred, *, multioutput='uniform_average', sample_weight=None) [source] Median absolute error regression loss. Median absolute error output is non-negative floating point. The best value is 0.0. Read more in the User Guide. Parameters y_truearray-like of shape = (n_samples) or (n_samples, n_outputs) Ground truth (correct) target values. y_predarray-like of shape = (n_samples) or (n_samples, n_outputs) Estimated target values. multioutput{‘raw_values’, ‘uniform_average’} or array-like of shape (n_outputs,), default=’uniform_average’ Defines aggregating of multiple output values. Array-like value defines weights used to average errors. ‘raw_values’ : Returns a full set of errors in case of multioutput input. ‘uniform_average’ : Errors of all outputs are averaged with uniform weight. sample_weightarray-like of shape (n_samples,), default=None Sample weights. New in version 0.24. Returns lossfloat or ndarray of floats If multioutput is ‘raw_values’, then median absolute error is returned for each output separately. If multioutput is ‘uniform_average’ or an ndarray of weights, then the weighted average of all output errors is returned. Examples >>> from sklearn.metrics import median_absolute_error >>> y_true = [3, -0.5, 2, 7] >>> y_pred = [2.5, 0.0, 2, 8] >>> median_absolute_error(y_true, y_pred) 0.5 >>> y_true = [[0.5, 1], [-1, 1], [7, -6]] >>> y_pred = [[0, 2], [-1, 2], [8, -5]] >>> median_absolute_error(y_true, y_pred) 0.75 >>> median_absolute_error(y_true, y_pred, multioutput='raw_values') array([0.5, 1. ]) >>> median_absolute_error(y_true, y_pred, multioutput=[0.3, 0.7]) 0.85
sklearn.metrics.multilabel_confusion_matrix(y_true, y_pred, *, sample_weight=None, labels=None, samplewise=False) [source] Compute a confusion matrix for each class or sample. New in version 0.21. Compute class-wise (default) or sample-wise (samplewise=True) multilabel confusion matrix to evaluate the accuracy of a classification, and output confusion matrices for each class or sample. In the multilabel confusion matrix \(MCM\), the count of true negatives is \(MCM_{:,0,0}\), false negatives is \(MCM_{:,1,0}\), true positives is \(MCM_{:,1,1}\) and false positives is \(MCM_{:,0,1}\). Multiclass data will be treated as if binarized under a one-vs-rest transformation. Returned confusion matrices will be in the order of sorted unique labels in the union of (y_true, y_pred). Read more in the User Guide. Parameters y_true{array-like, sparse matrix} of shape (n_samples, n_outputs) or (n_samples,) Ground truth (correct) target values. y_pred{array-like, sparse matrix} of shape (n_samples, n_outputs) or (n_samples,) Estimated targets as returned by a classifier. sample_weightarray-like of shape (n_samples,), default=None Sample weights. labelsarray-like of shape (n_classes,), default=None A list of classes or column indices to select some (or to force inclusion of classes absent from the data). samplewisebool, default=False In the multilabel case, this calculates a confusion matrix per sample. Returns multi_confusionndarray of shape (n_outputs, 2, 2) A 2x2 confusion matrix corresponding to each output in the input. When calculating class-wise multi_confusion (default), then n_outputs = n_labels; when calculating sample-wise multi_confusion (samplewise=True), n_outputs = n_samples. If labels is defined, the results will be returned in the order specified in labels, otherwise the results will be returned in sorted order by default. See also confusion_matrix Notes The multilabel_confusion_matrix calculates class-wise or sample-wise multilabel confusion matrices, and in multiclass tasks, labels are binarized in a one-vs-rest way; while confusion_matrix calculates one confusion matrix for confusion between every two classes. Examples Multilabel-indicator case: >>> import numpy as np >>> from sklearn.metrics import multilabel_confusion_matrix >>> y_true = np.array([[1, 0, 1], ... [0, 1, 0]]) >>> y_pred = np.array([[1, 0, 0], ... [0, 1, 1]]) >>> multilabel_confusion_matrix(y_true, y_pred) array([[[1, 0], [0, 1]], [[1, 0], [0, 1]], [[0, 1], [1, 0]]]) Multiclass case: >>> y_true = ["cat", "ant", "cat", "cat", "ant", "bird"] >>> y_pred = ["ant", "ant", "cat", "cat", "ant", "cat"] >>> multilabel_confusion_matrix(y_true, y_pred, ... labels=["ant", "bird", "cat"]) array([[[3, 1], [0, 2]], [[5, 0], [1, 0]], [[2, 1], [1, 2]]])
sklearn.metrics.mutual_info_score(labels_true, labels_pred, *, contingency=None) [source] Mutual Information between two clusterings. The Mutual Information is a measure of the similarity between two labels of the same data. Where \(|U_i|\) is the number of samples in cluster \(U_i\) and \(|V_j|\) is the number of samples in cluster \(V_j\), the Mutual Information between clusterings \(U\) and \(V\) is given as: \[MI(U,V)=\sum_{i=1}^{|U|} \sum_{j=1}^{|V|} \frac{|U_i\cap V_j|}{N} \log\frac{N|U_i \cap V_j|}{|U_i||V_j|}\] This metric is independent of the absolute values of the labels: a permutation of the class or cluster label values won’t change the score value in any way. This metric is furthermore symmetric: switching labels_true with labels_pred will return the same score value. This can be useful to measure the agreement of two independent label assignment strategies on the same dataset when the real ground truth is not known. Read more in the User Guide. Parameters labels_trueint array, shape = [n_samples] A clustering of the data into disjoint subsets. labels_predint array-like of shape (n_samples,) A clustering of the data into disjoint subsets. contingency{ndarray, sparse matrix} of shape (n_classes_true, n_classes_pred), default=None A contingency matrix given by the contingency_matrix function. If value is None, it will be computed, otherwise the given value is used, with labels_true and labels_pred ignored. Returns mifloat Mutual information, a non-negative value. See also adjusted_mutual_info_score Adjusted against chance Mutual Information. normalized_mutual_info_score Normalized Mutual Information. Notes The logarithm used is the natural logarithm (base-e).
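An illustrative call (output truncated): two balanced two-cluster labelings that agree up to a permutation of label values share ln(2) ≈ 0.693 nats of information:
>>> from sklearn.metrics import mutual_info_score
>>> mutual_info_score([0, 0, 1, 1], [1, 1, 0, 0])  # natural logarithm, so ln(2)
0.69...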
sklearn.metrics.ndcg_score(y_true, y_score, *, k=None, sample_weight=None, ignore_ties=False) [source] Compute Normalized Discounted Cumulative Gain. Sum the true scores ranked in the order induced by the predicted scores, after applying a logarithmic discount. Then divide by the best possible score (Ideal DCG, obtained for a perfect ranking) to obtain a score between 0 and 1. This ranking metric yields a high value if true labels are ranked high by y_score. Parameters y_truendarray of shape (n_samples, n_labels) True targets of multilabel classification, or true scores of entities to be ranked. y_scorendarray of shape (n_samples, n_labels) Target scores, can either be probability estimates, confidence values, or non-thresholded measure of decisions (as returned by “decision_function” on some classifiers). kint, default=None Only consider the highest k scores in the ranking. If None, use all outputs. sample_weightndarray of shape (n_samples,), default=None Sample weights. If None, all samples are given the same weight. ignore_tiesbool, default=False Assume that there are no ties in y_score (which is likely to be the case if y_score is continuous) for efficiency gains. Returns normalized_discounted_cumulative_gainfloat in [0., 1.] The averaged NDCG scores for all samples. See also dcg_score Discounted Cumulative Gain (not normalized). References Wikipedia entry for Discounted Cumulative Gain Jarvelin, K., & Kekalainen, J. (2002). Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems (TOIS), 20(4), 422-446. Wang, Y., Wang, L., Li, Y., He, D., Chen, W., & Liu, T. Y. (2013, May). A theoretical analysis of NDCG ranking measures. In Proceedings of the 26th Annual Conference on Learning Theory (COLT 2013). McSherry, F., & Najork, M. (2008, March). Computing information retrieval performance measures efficiently in the presence of tied scores. In European conference on information retrieval (pp. 414-421). Springer, Berlin, Heidelberg. Examples >>> import numpy as np >>> from sklearn.metrics import ndcg_score >>> # we have ground-truth relevance of some answers to a query: >>> true_relevance = np.asarray([[10, 0, 0, 1, 5]]) >>> # we predict some scores (relevance) for the answers >>> scores = np.asarray([[.1, .2, .3, 4, 70]]) >>> ndcg_score(true_relevance, scores) 0.69... >>> scores = np.asarray([[.05, 1.1, 1., .5, .0]]) >>> ndcg_score(true_relevance, scores) 0.49... >>> # we can set k to truncate the sum; only top k answers contribute. >>> ndcg_score(true_relevance, scores, k=4) 0.35... >>> # the normalization takes k into account so a perfect answer >>> # would still get 1.0 >>> ndcg_score(true_relevance, true_relevance, k=4) 1.0 >>> # now we have some ties in our prediction >>> scores = np.asarray([[1, 0, 0, 0, 1]]) >>> # by default ties are averaged, so here we get the average (normalized) >>> # true relevance of our top predictions: (10 / 10 + 5 / 10) / 2 = .75 >>> ndcg_score(true_relevance, scores, k=1) 0.75 >>> # we can choose to ignore ties for faster results, but only >>> # if we know there aren't ties in our scores, otherwise we get >>> # wrong results: >>> ndcg_score(true_relevance, ... scores, k=1, ignore_ties=True) 0.5
sklearn.metrics.normalized_mutual_info_score(labels_true, labels_pred, *, average_method='arithmetic') [source] Normalized Mutual Information between two clusterings. Normalized Mutual Information (NMI) is a normalization of the Mutual Information (MI) score to scale the results between 0 (no mutual information) and 1 (perfect correlation). In this function, mutual information is normalized by some generalized mean of H(labels_true) and H(labels_pred), defined by the average_method. This measure is not adjusted for chance. Therefore adjusted_mutual_info_score might be preferred. This metric is independent of the absolute values of the labels: a permutation of the class or cluster label values won’t change the score value in any way. This metric is furthermore symmetric: switching labels_true with labels_pred will return the same score value. This can be useful to measure the agreement of two independent label assignment strategies on the same dataset when the real ground truth is not known. Read more in the User Guide. Parameters labels_trueint array, shape = [n_samples] A clustering of the data into disjoint subsets. labels_predint array-like of shape (n_samples,) A clustering of the data into disjoint subsets. average_methodstr, default=’arithmetic’ How to compute the normalizer in the denominator. Possible options are ‘min’, ‘geometric’, ‘arithmetic’, and ‘max’. New in version 0.20. Changed in version 0.22: The default value of average_method changed from ‘geometric’ to ‘arithmetic’. Returns nmifloat score between 0.0 and 1.0. 1.0 stands for perfectly complete labeling See also v_measure_score V-Measure (NMI with arithmetic mean option). adjusted_rand_score Adjusted Rand Index. adjusted_mutual_info_score Adjusted Mutual Information (adjusted against chance). Examples Perfect labelings are both homogeneous and complete, hence have score 1.0: >>> from sklearn.metrics.cluster import normalized_mutual_info_score >>> normalized_mutual_info_score([0, 0, 1, 1], [0, 0, 1, 1]) 1.0 >>> normalized_mutual_info_score([0, 0, 1, 1], [1, 1, 0, 0]) 1.0 If class members are completely split across different clusters, the assignment is totally incomplete, hence the NMI is null: >>> normalized_mutual_info_score([0, 0, 0, 0], [0, 1, 2, 3]) 0.0
sklearn.metrics.pairwise.additive_chi2_kernel(X, Y=None) [source] Computes the additive chi-squared kernel between observations in X and Y. The chi-squared kernel is computed between each pair of rows in X and Y. X and Y have to be non-negative. This kernel is most commonly applied to histograms. The chi-squared kernel is given by: k(x, y) = -Sum [(x - y)^2 / (x + y)] It can be interpreted as a weighted difference per entry. Read more in the User Guide. Parameters Xarray-like of shape (n_samples_X, n_features) Yndarray of shape (n_samples_Y, n_features), default=None Returns kernel_matrixndarray of shape (n_samples_X, n_samples_Y) See also chi2_kernel The exponentiated version of the kernel, which is usually preferable. sklearn.kernel_approximation.AdditiveChi2Sampler A Fourier approximation to this kernel. Notes As the negative of a distance, this kernel is only conditionally positive definite. References Zhang, J. and Marszalek, M. and Lazebnik, S. and Schmid, C. Local features and kernels for classification of texture and object categories: A comprehensive study International Journal of Computer Vision 2007 https://research.microsoft.com/en-us/um/people/manik/projects/trade-off/papers/ZhangIJCV06.pdf
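A small illustrative call on toy histograms (outputs rounded): identical rows score 0, and the pair ([1, 2], [2, 1]) scores -(1/3 + 1/3):
>>> from sklearn.metrics.pairwise import additive_chi2_kernel
>>> additive_chi2_kernel([[1, 2], [2, 1]])
array([[ 0.        , -0.66666667],
       [-0.66666667,  0.        ]])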
sklearn.metrics.pairwise.chi2_kernel(X, Y=None, gamma=1.0) [source] Computes the exponential chi-squared kernel between X and Y. The chi-squared kernel is computed between each pair of rows in X and Y. X and Y have to be non-negative. This kernel is most commonly applied to histograms. The chi-squared kernel is given by: k(x, y) = exp(-gamma Sum [(x - y)^2 / (x + y)]) It can be interpreted as a weighted difference per entry. Read more in the User Guide. Parameters Xarray-like of shape (n_samples_X, n_features) Yndarray of shape (n_samples_Y, n_features), default=None gammafloat, default=1. Scaling parameter of the chi2 kernel. Returns kernel_matrixndarray of shape (n_samples_X, n_samples_Y) See also additive_chi2_kernel The additive version of this kernel. sklearn.kernel_approximation.AdditiveChi2Sampler A Fourier approximation to the additive version of this kernel. References Zhang, J. and Marszalek, M. and Lazebnik, S. and Schmid, C. Local features and kernels for classification of texture and object categories: A comprehensive study International Journal of Computer Vision 2007 https://research.microsoft.com/en-us/um/people/manik/projects/trade-off/papers/ZhangIJCV06.pdf
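The same toy histograms through the exponentiated kernel (illustrative, with gamma=1): identical rows map to 1 and the off-diagonal pair to exp(-2/3):
>>> from sklearn.metrics.pairwise import chi2_kernel
>>> chi2_kernel([[1, 2], [2, 1]], gamma=1)
array([[1.        , 0.51341712],
       [0.51341712, 1.        ]])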
sklearn.metrics.pairwise.cosine_distances(X, Y=None) [source] Compute cosine distance between samples in X and Y. Cosine distance is defined as 1.0 minus the cosine similarity. Read more in the User Guide. Parameters X{array-like, sparse matrix} of shape (n_samples_X, n_features) Matrix X. Y{array-like, sparse matrix} of shape (n_samples_Y, n_features), default=None Matrix Y. Returns distance matrixndarray of shape (n_samples_X, n_samples_Y) See also cosine_similarity scipy.spatial.distance.cosine Dense matrices only.
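A minimal illustrative call: orthogonal vectors are at cosine distance 1, and each vector is at distance 0 from itself:
>>> from sklearn.metrics.pairwise import cosine_distances
>>> cosine_distances([[1, 0], [0, 1]])
array([[0., 1.],
       [1., 0.]])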
sklearn.metrics.pairwise.cosine_similarity(X, Y=None, dense_output=True) [source] Compute cosine similarity between samples in X and Y. Cosine similarity, or the cosine kernel, computes similarity as the normalized dot product of X and Y: K(X, Y) = <X, Y> / (||X||*||Y||) On L2-normalized data, this function is equivalent to linear_kernel. Read more in the User Guide. Parameters X{ndarray, sparse matrix} of shape (n_samples_X, n_features) Input data. Y{ndarray, sparse matrix} of shape (n_samples_Y, n_features), default=None Input data. If None, the output will be the pairwise similarities between all samples in X. dense_outputbool, default=True Whether to return dense output even when the input is sparse. If False, the output is sparse if both input arrays are sparse. New in version 0.17: parameter dense_output for dense output. Returns kernel matrixndarray of shape (n_samples_X, n_samples_Y)
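A minimal illustrative call: each row has similarity 1 with itself, and [1, 0] vs. [1, 1] gives 1/sqrt(2):
>>> from sklearn.metrics.pairwise import cosine_similarity
>>> cosine_similarity([[1, 0], [1, 1]])
array([[1.        , 0.70710678],
       [0.70710678, 1.        ]])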
sklearn.metrics.pairwise.distance_metrics() [source] Valid metrics for pairwise_distances. This function simply returns the valid pairwise distance metrics. It exists to allow for a description of the mapping for each of the valid strings. The valid distance metrics, and the function they map to, are: metric Function ‘cityblock’ metrics.pairwise.manhattan_distances ‘cosine’ metrics.pairwise.cosine_distances ‘euclidean’ metrics.pairwise.euclidean_distances ‘haversine’ metrics.pairwise.haversine_distances ‘l1’ metrics.pairwise.manhattan_distances ‘l2’ metrics.pairwise.euclidean_distances ‘manhattan’ metrics.pairwise.manhattan_distances ‘nan_euclidean’ metrics.pairwise.nan_euclidean_distances Read more in the User Guide.
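An illustrative look-up showing that the returned mapping resolves metric names to the functions listed above:
>>> from sklearn.metrics.pairwise import distance_metrics, cosine_distances
>>> distance_metrics()['cosine'] is cosine_distances
True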
sklearn.metrics.pairwise.euclidean_distances(X, Y=None, *, Y_norm_squared=None, squared=False, X_norm_squared=None) [source] Considering the rows of X (and Y=X) as vectors, compute the distance matrix between each pair of vectors. For efficiency reasons, the euclidean distance between a pair of row vector x and y is computed as: dist(x, y) = sqrt(dot(x, x) - 2 * dot(x, y) + dot(y, y)) This formulation has two advantages over other ways of computing distances. First, it is computationally efficient when dealing with sparse data. Second, if one argument varies but the other remains unchanged, then dot(x, x) and/or dot(y, y) can be pre-computed. However, this is not the most precise way of doing this computation, because this equation potentially suffers from “catastrophic cancellation”. Also, the distance matrix returned by this function may not be exactly symmetric as required by, e.g., scipy.spatial.distance functions. Read more in the User Guide. Parameters X{array-like, sparse matrix} of shape (n_samples_X, n_features) Y{array-like, sparse matrix} of shape (n_samples_Y, n_features), default=None Y_norm_squaredarray-like of shape (n_samples_Y,), default=None Pre-computed dot-products of vectors in Y (e.g., (Y**2).sum(axis=1)) May be ignored in some cases, see the note below. squaredbool, default=False Return squared Euclidean distances. X_norm_squaredarray-like of shape (n_samples,), default=None Pre-computed dot-products of vectors in X (e.g., (X**2).sum(axis=1)) May be ignored in some cases, see the note below. Returns distancesndarray of shape (n_samples_X, n_samples_Y) See also paired_distances Distances between pairs of elements of X and Y. Notes To achieve better accuracy, X_norm_squared and Y_norm_squared may be unused if they are passed as float32. Examples >>> from sklearn.metrics.pairwise import euclidean_distances >>> X = [[0, 1], [1, 1]] >>> # distance between rows of X >>> euclidean_distances(X, X) array([[0., 1.], [1., 0.]]) >>> # get distance to origin >>> euclidean_distances(X, [[0, 0]]) array([[1. ], [1.41421356]])
sklearn.metrics.pairwise.haversine_distances(X, Y=None) [source] Compute the Haversine distance between samples in X and Y. The Haversine (or great circle) distance is the angular distance between two points on the surface of a sphere. The first coordinate of each point is assumed to be the latitude, the second is the longitude, given in radians. The dimension of the data must be 2. \[D(x, y) = 2\arcsin\left[\sqrt{\sin^2\left(\frac{x_1 - y_1}{2}\right) + \cos(x_1)\cos(y_1)\sin^2\left(\frac{x_2 - y_2}{2}\right)}\right]\] Parameters Xarray-like of shape (n_samples_X, 2) Yarray-like of shape (n_samples_Y, 2), default=None Returns distancendarray of shape (n_samples_X, n_samples_Y) Notes As the Earth is nearly spherical, the haversine formula provides a good approximation of the distance between two points of the Earth surface, with a less than 1% error on average. Examples We want to calculate the distance between the Ezeiza Airport (Buenos Aires, Argentina) and the Charles de Gaulle Airport (Paris, France). >>> from sklearn.metrics.pairwise import haversine_distances >>> from math import radians >>> bsas = [-34.83333, -58.5166646] >>> paris = [49.0083899664, 2.53844117956] >>> bsas_in_radians = [radians(_) for _ in bsas] >>> paris_in_radians = [radians(_) for _ in paris] >>> result = haversine_distances([bsas_in_radians, paris_in_radians]) >>> result * 6371000/1000 # multiply by Earth radius to get kilometers array([[ 0. , 11099.54035582], [11099.54035582, 0. ]])
sklearn.metrics.pairwise.kernel_metrics() [source] Valid metrics for pairwise_kernels. This function simply returns the valid pairwise kernel metrics. It exists, however, to allow for a verbose description of the mapping for each of the valid strings. The valid kernel metrics, and the function they map to, are: metric Function ‘additive_chi2’ sklearn.pairwise.additive_chi2_kernel ‘chi2’ sklearn.pairwise.chi2_kernel ‘linear’ sklearn.pairwise.linear_kernel ‘poly’ sklearn.pairwise.polynomial_kernel ‘polynomial’ sklearn.pairwise.polynomial_kernel ‘rbf’ sklearn.pairwise.rbf_kernel ‘laplacian’ sklearn.pairwise.laplacian_kernel ‘sigmoid’ sklearn.pairwise.sigmoid_kernel ‘cosine’ sklearn.pairwise.cosine_similarity Read more in the User Guide.
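An illustrative look-up, analogous to the distance_metrics example above:
>>> from sklearn.metrics.pairwise import kernel_metrics, rbf_kernel
>>> kernel_metrics()['rbf'] is rbf_kernel
True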
sklearn.metrics.pairwise.laplacian_kernel(X, Y=None, gamma=None) [source] Compute the laplacian kernel between X and Y. The laplacian kernel is defined as: K(x, y) = exp(-gamma ||x-y||_1) for each pair of rows x in X and y in Y. Read more in the User Guide. New in version 0.17. Parameters Xndarray of shape (n_samples_X, n_features) Yndarray of shape (n_samples_Y, n_features), default=None gammafloat, default=None If None, defaults to 1.0 / n_features. Returns kernel_matrixndarray of shape (n_samples_X, n_samples_Y)
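A minimal illustrative call with the default gamma (1.0 / n_features = 0.5 for two features), so K([0, 0], [1, 1]) = exp(-0.5 * 2) = exp(-1):
>>> from sklearn.metrics.pairwise import laplacian_kernel
>>> laplacian_kernel([[0, 0], [1, 1]])
array([[1.        , 0.36787944],
       [0.36787944, 1.        ]])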
sklearn.modules.generated.sklearn.metrics.pairwise.laplacian_kernel#sklearn.metrics.pairwise.laplacian_kernel
sklearn.metrics.pairwise.linear_kernel(X, Y=None, dense_output=True) [source] Compute the linear kernel between X and Y. Read more in the User Guide. Parameters Xndarray of shape (n_samples_X, n_features) Yndarray of shape (n_samples_Y, n_features), default=None dense_outputbool, default=True Whether to return dense output even when the input is sparse. If False, the output is sparse if both input arrays are sparse. New in version 0.20. Returns Gram matrixndarray of shape (n_samples_X, n_samples_Y)
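Examples A minimal illustrative sketch; the linear kernel is simply the matrix of pairwise dot products:
>>> from sklearn.metrics.pairwise import linear_kernel
>>> X = [[0, 1], [1, 1]]
>>> linear_kernel(X, X)  # entry (i, j) is <X[i], X[j]>
array([[1., 1.],
       [1., 2.]])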
sklearn.modules.generated.sklearn.metrics.pairwise.linear_kernel#sklearn.metrics.pairwise.linear_kernel
sklearn.metrics.pairwise.manhattan_distances(X, Y=None, *, sum_over_features=True) [source] Compute the L1 distances between the vectors in X and Y. With sum_over_features equal to False it returns the componentwise distances. Read more in the User Guide. Parameters Xarray-like of shape (n_samples_X, n_features) Yarray-like of shape (n_samples_Y, n_features), default=None sum_over_featuresbool, default=True If True the function returns the pairwise distance matrix else it returns the componentwise L1 pairwise-distances. Not supported for sparse matrix inputs. Returns Dndarray of shape (n_samples_X * n_samples_Y, n_features) or (n_samples_X, n_samples_Y) If sum_over_features is False shape is (n_samples_X * n_samples_Y, n_features) and D contains the componentwise L1 pairwise-distances (ie. absolute difference), else shape is (n_samples_X, n_samples_Y) and D contains the pairwise L1 distances. Notes When X and/or Y are CSR sparse matrices and they are not already in canonical format, this function modifies them in-place to make them canonical. Examples >>> from sklearn.metrics.pairwise import manhattan_distances >>> manhattan_distances([[3]], [[3]]) array([[0.]]) >>> manhattan_distances([[3]], [[2]]) array([[1.]]) >>> manhattan_distances([[2]], [[3]]) array([[1.]]) >>> manhattan_distances([[1, 2], [3, 4]], [[1, 2], [0, 3]]) array([[0., 2.], [4., 4.]]) >>> import numpy as np >>> X = np.ones((1, 2)) >>> y = np.full((2, 2), 2.) >>> manhattan_distances(X, y, sum_over_features=False) array([[1., 1.], [1., 1.]])
sklearn.modules.generated.sklearn.metrics.pairwise.manhattan_distances#sklearn.metrics.pairwise.manhattan_distances
sklearn.metrics.pairwise.nan_euclidean_distances(X, Y=None, *, squared=False, missing_values=nan, copy=True) [source] Calculate the euclidean distances in the presence of missing values. Compute the euclidean distance between each pair of samples in X and Y, where Y=X is assumed if Y=None. When calculating the distance between a pair of samples, this formulation ignores feature coordinates with a missing value in either sample and scales up the weight of the remaining coordinates: dist(x,y) = sqrt(weight * sq. distance from present coordinates) where, weight = Total # of coordinates / # of present coordinates For example, the distance between [3, na, na, 6] and [1, na, 4, 5] is: \[\sqrt{\frac{4}{2}((3-1)^2 + (6-5)^2)}\] If all the coordinates are missing or if there are no common present coordinates then NaN is returned for that pair. Read more in the User Guide. New in version 0.22. Parameters Xarray-like of shape=(n_samples_X, n_features) Yarray-like of shape=(n_samples_Y, n_features), default=None squaredbool, default=False Return squared Euclidean distances. missing_valuesnp.nan or int, default=np.nan Representation of missing value. copybool, default=True Make and use a deep copy of X and Y (if Y exists). Returns distancesndarray of shape (n_samples_X, n_samples_Y) See also paired_distances Distances between pairs of elements of X and Y. References John K. Dixon, “Pattern Recognition with Partly Missing Data”, IEEE Transactions on Systems, Man, and Cybernetics, Volume: 9, Issue: 10, pp. 617 - 621, Oct. 1979. http://ieeexplore.ieee.org/abstract/document/4310090/ Examples >>> from sklearn.metrics.pairwise import nan_euclidean_distances >>> nan = float("NaN") >>> X = [[0, 1], [1, nan]] >>> nan_euclidean_distances(X, X) # distance between rows of X array([[0. , 1.41421356], [1.41421356, 0. ]]) >>> # get distance to origin >>> nan_euclidean_distances(X, [[0, 0]]) array([[1. ], [1.41421356]])
sklearn.modules.generated.sklearn.metrics.pairwise.nan_euclidean_distances#sklearn.metrics.pairwise.nan_euclidean_distances
sklearn.metrics.pairwise.paired_cosine_distances(X, Y) [source] Computes the paired cosine distances between X and Y. Read more in the User Guide. Parameters Xarray-like of shape (n_samples, n_features) Yarray-like of shape (n_samples, n_features) Returns distancesndarray of shape (n_samples,) Notes The cosine distance is equivalent to half the squared euclidean distance if each sample is normalized to unit norm.
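Examples A minimal illustrative sketch; each entry is 1 minus the cosine similarity of the corresponding pair of rows:
>>> from sklearn.metrics.pairwise import paired_cosine_distances
>>> X = [[0, 1], [1, 1]]
>>> Y = [[0, 1], [1, 0]]
>>> paired_cosine_distances(X, Y)  # second pair: 1 - 1/sqrt(2)
array([0.        , 0.2928...])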
sklearn.modules.generated.sklearn.metrics.pairwise.paired_cosine_distances#sklearn.metrics.pairwise.paired_cosine_distances
sklearn.metrics.pairwise.paired_distances(X, Y, *, metric='euclidean', **kwds) [source] Computes the paired distances between X and Y. Computes the distances between (X[0], Y[0]), (X[1], Y[1]), etc… Read more in the User Guide. Parameters Xndarray of shape (n_samples, n_features) Array 1 for distance computation. Yndarray of shape (n_samples, n_features) Array 2 for distance computation. metricstr or callable, default=”euclidean” The metric to use when calculating distance between instances in a feature array. If metric is a string, it must be one of the options specified in PAIRED_DISTANCES, including “euclidean”, “manhattan”, or “cosine”. Alternatively, if metric is a callable function, it is called on each pair of instances (rows) and the resulting value recorded. The callable should take two arrays from X as input and return a value indicating the distance between them. Returns distancesndarray of shape (n_samples,) See also pairwise_distances Computes the distance between every pair of samples. Examples >>> from sklearn.metrics.pairwise import paired_distances >>> X = [[0, 1], [1, 1]] >>> Y = [[0, 1], [2, 1]] >>> paired_distances(X, Y) array([0., 1.])
sklearn.modules.generated.sklearn.metrics.pairwise.paired_distances#sklearn.metrics.pairwise.paired_distances
sklearn.metrics.pairwise.paired_euclidean_distances(X, Y) [source] Computes the paired euclidean distances between X and Y. Read more in the User Guide. Parameters Xarray-like of shape (n_samples, n_features) Yarray-like of shape (n_samples, n_features) Returns distancesndarray of shape (n_samples,)
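Examples A minimal illustrative sketch; entry i is the euclidean distance between X[i] and Y[i]:
>>> from sklearn.metrics.pairwise import paired_euclidean_distances
>>> X = [[0, 0], [1, 1]]
>>> Y = [[0, 1], [4, 5]]
>>> paired_euclidean_distances(X, Y)  # sqrt(3**2 + 4**2) = 5 for the second pair
array([1., 5.])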
sklearn.modules.generated.sklearn.metrics.pairwise.paired_euclidean_distances#sklearn.metrics.pairwise.paired_euclidean_distances
sklearn.metrics.pairwise.paired_manhattan_distances(X, Y) [source] Compute the L1 distances between the vectors in X and Y. Read more in the User Guide. Parameters Xarray-like of shape (n_samples, n_features) Yarray-like of shape (n_samples, n_features) Returns distancesndarray of shape (n_samples,)
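Examples A minimal illustrative sketch; entry i is the L1 distance between X[i] and Y[i]:
>>> from sklearn.metrics.pairwise import paired_manhattan_distances
>>> X = [[1, 1], [3, 4]]
>>> Y = [[1, 0], [0, 3]]
>>> paired_manhattan_distances(X, Y)  # |3-0| + |4-3| = 4 for the second pair
array([1., 4.])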
sklearn.modules.generated.sklearn.metrics.pairwise.paired_manhattan_distances#sklearn.metrics.pairwise.paired_manhattan_distances
sklearn.metrics.pairwise.pairwise_kernels(X, Y=None, metric='linear', *, filter_params=False, n_jobs=None, **kwds) [source] Compute the kernel between arrays X and optional array Y. This method takes either a vector array or a kernel matrix, and returns a kernel matrix. If the input is a vector array, the kernels are computed. If the input is a kernel matrix, it is returned instead. This method provides a safe way to take a kernel matrix as input, while preserving compatibility with many other algorithms that take a vector array. If Y is given (default is None), then the returned matrix is the pairwise kernel between the arrays from both X and Y. Valid values for metric are: [‘additive_chi2’, ‘chi2’, ‘linear’, ‘poly’, ‘polynomial’, ‘rbf’, ‘laplacian’, ‘sigmoid’, ‘cosine’] Read more in the User Guide. Parameters Xndarray of shape (n_samples_X, n_samples_X) or (n_samples_X, n_features) Array of pairwise kernels between samples, or a feature array. The shape of the array should be (n_samples_X, n_samples_X) if metric == “precomputed” and (n_samples_X, n_features) otherwise. Yndarray of shape (n_samples_Y, n_features), default=None A second feature array only if X has shape (n_samples_X, n_features). metricstr or callable, default=”linear” The metric to use when calculating kernel between instances in a feature array. If metric is a string, it must be one of the metrics in pairwise.PAIRWISE_KERNEL_FUNCTIONS. If metric is “precomputed”, X is assumed to be a kernel matrix. Alternatively, if metric is a callable function, it is called on each pair of instances (rows) and the resulting value recorded. The callable should take two rows from X as input and return the corresponding kernel value as a single number. This means that callables from sklearn.metrics.pairwise are not allowed, as they operate on matrices, not single samples. Use the string identifying the kernel instead. filter_paramsbool, default=False Whether to filter invalid parameters or not. n_jobsint, default=None The number of jobs to use for the computation. This works by breaking down the pairwise matrix into n_jobs even slices and computing them in parallel. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. **kwdsoptional keyword parameters Any further parameters are passed directly to the kernel function. Returns Kndarray of shape (n_samples_X, n_samples_X) or (n_samples_X, n_samples_Y) A kernel matrix K such that K_{i, j} is the kernel between the ith and jth vectors of the given matrix X, if Y is None. If Y is not None, then K_{i, j} is the kernel between the ith array from X and the jth array from Y. Notes If metric is ‘precomputed’, Y is ignored and X is returned.
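Examples A minimal illustrative sketch; the metric string dispatches to the corresponding kernel function, and extra keyword arguments such as gamma are passed through:
>>> from sklearn.metrics.pairwise import pairwise_kernels
>>> X = [[0, 1], [1, 1]]
>>> pairwise_kernels(X, metric='linear')  # same result as linear_kernel(X, X)
array([[1., 1.],
       [1., 2.]])
>>> pairwise_kernels(X, metric='rbf', gamma=0.5)  # exp(-0.5) off the diagonal
array([[1.        , 0.6065...],
       [0.6065..., 1.        ]])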
sklearn.modules.generated.sklearn.metrics.pairwise.pairwise_kernels#sklearn.metrics.pairwise.pairwise_kernels
sklearn.metrics.pairwise.polynomial_kernel(X, Y=None, degree=3, gamma=None, coef0=1) [source] Compute the polynomial kernel between X and Y: K(X, Y) = (gamma <X, Y> + coef0)^degree Read more in the User Guide. Parameters Xndarray of shape (n_samples_X, n_features) Yndarray of shape (n_samples_Y, n_features), default=None degreeint, default=3 gammafloat, default=None If None, defaults to 1.0 / n_features. coef0float, default=1 Returns Gram matrixndarray of shape (n_samples_X, n_samples_Y)
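Examples A minimal illustrative sketch of the formula above with degree=2, gamma=1 and coef0=1, i.e. (<x, y> + 1)^2:
>>> from sklearn.metrics.pairwise import polynomial_kernel
>>> X = [[0, 1], [1, 1]]  # dot products are [[1, 1], [1, 2]]
>>> polynomial_kernel(X, degree=2, gamma=1, coef0=1)
array([[4., 4.],
       [4., 9.]])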
sklearn.modules.generated.sklearn.metrics.pairwise.polynomial_kernel#sklearn.metrics.pairwise.polynomial_kernel
sklearn.metrics.pairwise.rbf_kernel(X, Y=None, gamma=None) [source] Compute the rbf (gaussian) kernel between X and Y: K(x, y) = exp(-gamma ||x-y||^2) for each pair of rows x in X and y in Y. Read more in the User Guide. Parameters Xndarray of shape (n_samples_X, n_features) Yndarray of shape (n_samples_Y, n_features), default=None gammafloat, default=None If None, defaults to 1.0 / n_features. Returns kernel_matrixndarray of shape (n_samples_X, n_samples_Y)
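Examples A minimal illustrative sketch; with gamma=0.5 and rows at squared euclidean distance 1, the off-diagonal entries equal exp(-0.5), per the formula above:
>>> from sklearn.metrics.pairwise import rbf_kernel
>>> X = [[0, 1], [1, 1]]  # ||x - y||^2 = 1 between the two rows
>>> rbf_kernel(X, gamma=0.5)
array([[1.        , 0.6065...],
       [0.6065..., 1.        ]])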
sklearn.modules.generated.sklearn.metrics.pairwise.rbf_kernel#sklearn.metrics.pairwise.rbf_kernel
sklearn.metrics.pairwise.sigmoid_kernel(X, Y=None, gamma=None, coef0=1) [source] Compute the sigmoid kernel between X and Y: K(X, Y) = tanh(gamma <X, Y> + coef0) Read more in the User Guide. Parameters Xndarray of shape (n_samples_X, n_features) Yndarray of shape (n_samples_Y, n_features), default=None gammafloat, default=None If None, defaults to 1.0 / n_features. coef0float, default=1 Returns Gram matrixndarray of shape (n_samples_X, n_samples_Y)
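Examples A minimal illustrative sketch of the formula above with gamma=1 and coef0=0, i.e. tanh(<x, y>):
>>> from sklearn.metrics.pairwise import sigmoid_kernel
>>> X = [[0, 1], [1, 1]]  # dot products are [[1, 1], [1, 2]]
>>> sigmoid_kernel(X, gamma=1, coef0=0)  # tanh(1) and tanh(2)
array([[0.7615..., 0.7615...],
       [0.7615..., 0.9640...]])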
sklearn.modules.generated.sklearn.metrics.pairwise.sigmoid_kernel#sklearn.metrics.pairwise.sigmoid_kernel
sklearn.metrics.pairwise_distances(X, Y=None, metric='euclidean', *, n_jobs=None, force_all_finite=True, **kwds) [source] Compute the distance matrix from a vector array X and optional Y. This method takes either a vector array or a distance matrix, and returns a distance matrix. If the input is a vector array, the distances are computed. If the input is a distance matrix, it is returned instead. This method provides a safe way to take a distance matrix as input, while preserving compatibility with many other algorithms that take a vector array. If Y is given (default is None), then the returned matrix is the pairwise distance between the arrays from both X and Y. Valid values for metric are: From scikit-learn: [‘cityblock’, ‘cosine’, ‘euclidean’, ‘l1’, ‘l2’, ‘manhattan’]. These metrics support sparse matrix inputs. [‘nan_euclidean’] but it does not yet support sparse matrices. From scipy.spatial.distance: [‘braycurtis’, ‘canberra’, ‘chebyshev’, ‘correlation’, ‘dice’, ‘hamming’, ‘jaccard’, ‘kulsinski’, ‘mahalanobis’, ‘minkowski’, ‘rogerstanimoto’, ‘russellrao’, ‘seuclidean’, ‘sokalmichener’, ‘sokalsneath’, ‘sqeuclidean’, ‘yule’] See the documentation for scipy.spatial.distance for details on these metrics. These metrics do not support sparse matrix inputs. Note that in the case of ‘cityblock’, ‘cosine’ and ‘euclidean’ (which are valid scipy.spatial.distance metrics), the scikit-learn implementation will be used, which is faster and has support for sparse matrices (except for ‘cityblock’). For a verbose description of the metrics from scikit-learn, see the __doc__ of the sklearn.metrics.pairwise.distance_metrics function. Read more in the User Guide. Parameters Xndarray of shape (n_samples_X, n_samples_X) or (n_samples_X, n_features) Array of pairwise distances between samples, or a feature array. The shape of the array should be (n_samples_X, n_samples_X) if metric == “precomputed” and (n_samples_X, n_features) otherwise. Yndarray of shape (n_samples_Y, n_features), default=None An optional second feature array. Only allowed if metric != “precomputed”. metricstr or callable, default=’euclidean’ The metric to use when calculating distance between instances in a feature array. If metric is a string, it must be one of the options allowed by scipy.spatial.distance.pdist for its metric parameter, or a metric listed in pairwise.PAIRWISE_DISTANCE_FUNCTIONS. If metric is “precomputed”, X is assumed to be a distance matrix. Alternatively, if metric is a callable function, it is called on each pair of instances (rows) and the resulting value recorded. The callable should take two arrays from X as input and return a value indicating the distance between them. n_jobsint, default=None The number of jobs to use for the computation. This works by breaking down the pairwise matrix into n_jobs even slices and computing them in parallel. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. force_all_finitebool or ‘allow-nan’, default=True Whether to raise an error on np.inf, np.nan, pd.NA in array. Ignored for a metric listed in pairwise.PAIRWISE_DISTANCE_FUNCTIONS. The possibilities are: True: Force all values of array to be finite. False: accepts np.inf, np.nan, pd.NA in array. ‘allow-nan’: accepts only np.nan and pd.NA values in array. Values cannot be infinite. New in version 0.22: force_all_finite accepts the string 'allow-nan'. Changed in version 0.23: Accepts pd.NA and converts it into np.nan.
**kwdsoptional keyword parameters Any further parameters are passed directly to the distance function. If using a scipy.spatial.distance metric, the parameters are still metric dependent. See the scipy docs for usage examples. Returns Dndarray of shape (n_samples_X, n_samples_X) or (n_samples_X, n_samples_Y) A distance matrix D such that D_{i, j} is the distance between the ith and jth vectors of the given matrix X, if Y is None. If Y is not None, then D_{i, j} is the distance between the ith array from X and the jth array from Y. See also pairwise_distances_chunked Performs the same calculation as this function, but returns a generator of chunks of the distance matrix, in order to limit memory usage. paired_distances Computes the distances between corresponding elements of two arrays.
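Examples A minimal illustrative sketch; string metrics dispatch to the implementations listed above:
>>> from sklearn.metrics import pairwise_distances
>>> X = [[0, 1], [1, 1]]
>>> pairwise_distances(X, metric='manhattan')
array([[0., 1.],
       [1., 0.]])
>>> pairwise_distances(X, [[0, 0]], metric='euclidean')  # distance to the origin
array([[1.        ],
       [1.41421356]])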
sklearn.modules.generated.sklearn.metrics.pairwise_distances#sklearn.metrics.pairwise_distances
sklearn.metrics.pairwise_distances_argmin(X, Y, *, axis=1, metric='euclidean', metric_kwargs=None) [source] Compute minimum distances between one point and a set of points. This function computes for each row in X, the index of the row of Y which is closest (according to the specified distance). This is mostly equivalent to calling: pairwise_distances(X, Y=Y, metric=metric).argmin(axis=axis) but uses much less memory, and is faster for large arrays. This function works with dense 2D arrays only. Parameters Xarray-like of shape (n_samples_X, n_features) Array containing points. Yarray-like of shape (n_samples_Y, n_features) Array containing points. axisint, default=1 Axis along which the argmin and distances are to be computed. metricstr or callable, default=”euclidean” Metric to use for distance computation. Any metric from scikit-learn or scipy.spatial.distance can be used. If metric is a callable function, it is called on each pair of instances (rows) and the resulting value recorded. The callable should take two arrays as input and return one value indicating the distance between them. This works for Scipy’s metrics, but is less efficient than passing the metric name as a string. Distance matrices are not supported. Valid values for metric are: from scikit-learn: [‘cityblock’, ‘cosine’, ‘euclidean’, ‘l1’, ‘l2’, ‘manhattan’] from scipy.spatial.distance: [‘braycurtis’, ‘canberra’, ‘chebyshev’, ‘correlation’, ‘dice’, ‘hamming’, ‘jaccard’, ‘kulsinski’, ‘mahalanobis’, ‘minkowski’, ‘rogerstanimoto’, ‘russellrao’, ‘seuclidean’, ‘sokalmichener’, ‘sokalsneath’, ‘sqeuclidean’, ‘yule’] See the documentation for scipy.spatial.distance for details on these metrics. metric_kwargsdict, default=None Keyword arguments to pass to specified metric function. Returns argminnumpy.ndarray Y[argmin[i], :] is the row in Y that is closest to X[i, :]. See also sklearn.metrics.pairwise_distances sklearn.metrics.pairwise_distances_argmin_min
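Examples A minimal illustrative sketch; for each row of X, the index of the nearest row of Y is returned:
>>> from sklearn.metrics import pairwise_distances_argmin
>>> X = [[0, 0], [1, 1]]
>>> Y = [[-1, 0], [1, 1]]
>>> pairwise_distances_argmin(X, Y)  # Y[0] is nearest to X[0], Y[1] to X[1]
array([0, 1])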
sklearn.modules.generated.sklearn.metrics.pairwise_distances_argmin#sklearn.metrics.pairwise_distances_argmin
sklearn.metrics.pairwise_distances_argmin_min(X, Y, *, axis=1, metric='euclidean', metric_kwargs=None) [source] Compute minimum distances between one point and a set of points. This function computes for each row in X, the index of the row of Y which is closest (according to the specified distance). The minimal distances are also returned. This is mostly equivalent to calling: (pairwise_distances(X, Y=Y, metric=metric).argmin(axis=axis), pairwise_distances(X, Y=Y, metric=metric).min(axis=axis)) but uses much less memory, and is faster for large arrays. Parameters X{array-like, sparse matrix} of shape (n_samples_X, n_features) Array containing points. Y{array-like, sparse matrix} of shape (n_samples_Y, n_features) Array containing points. axisint, default=1 Axis along which the argmin and distances are to be computed. metricstr or callable, default=’euclidean’ Metric to use for distance computation. Any metric from scikit-learn or scipy.spatial.distance can be used. If metric is a callable function, it is called on each pair of instances (rows) and the resulting value recorded. The callable should take two arrays as input and return one value indicating the distance between them. This works for Scipy’s metrics, but is less efficient than passing the metric name as a string. Distance matrices are not supported. Valid values for metric are: from scikit-learn: [‘cityblock’, ‘cosine’, ‘euclidean’, ‘l1’, ‘l2’, ‘manhattan’] from scipy.spatial.distance: [‘braycurtis’, ‘canberra’, ‘chebyshev’, ‘correlation’, ‘dice’, ‘hamming’, ‘jaccard’, ‘kulsinski’, ‘mahalanobis’, ‘minkowski’, ‘rogerstanimoto’, ‘russellrao’, ‘seuclidean’, ‘sokalmichener’, ‘sokalsneath’, ‘sqeuclidean’, ‘yule’] See the documentation for scipy.spatial.distance for details on these metrics. metric_kwargsdict, default=None Keyword arguments to pass to specified metric function. Returns argminndarray Y[argmin[i], :] is the row in Y that is closest to X[i, :]. distancesndarray distances[i] is the distance between the i-th row in X and the argmin[i]-th row in Y. See also sklearn.metrics.pairwise_distances sklearn.metrics.pairwise_distances_argmin
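Examples A minimal illustrative sketch; the indices of the nearest rows of Y and the corresponding distances are returned together:
>>> from sklearn.metrics import pairwise_distances_argmin_min
>>> X = [[0, 0], [1, 1]]
>>> Y = [[-1, 0], [1, 1]]
>>> argmin, distances = pairwise_distances_argmin_min(X, Y)
>>> argmin
array([0, 1])
>>> distances  # X[1] coincides with Y[1], hence distance 0
array([1., 0.])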
sklearn.modules.generated.sklearn.metrics.pairwise_distances_argmin_min#sklearn.metrics.pairwise_distances_argmin_min
sklearn.metrics.pairwise_distances_chunked(X, Y=None, *, reduce_func=None, metric='euclidean', n_jobs=None, working_memory=None, **kwds) [source] Generate a distance matrix chunk by chunk with optional reduction. In cases where not all of a pairwise distance matrix needs to be stored at once, this is used to calculate pairwise distances in working_memory-sized chunks. If reduce_func is given, it is run on each chunk and its return values are concatenated into lists, arrays or sparse matrices. Parameters Xndarray of shape (n_samples_X, n_samples_X) or (n_samples_X, n_features) Array of pairwise distances between samples, or a feature array. The shape of the array should be (n_samples_X, n_samples_X) if metric=’precomputed’ and (n_samples_X, n_features) otherwise. Yndarray of shape (n_samples_Y, n_features), default=None An optional second feature array. Only allowed if metric != “precomputed”. reduce_funccallable, default=None The function which is applied on each chunk of the distance matrix, reducing it to needed values. reduce_func(D_chunk, start) is called repeatedly, where D_chunk is a contiguous vertical slice of the pairwise distance matrix, starting at row start. It should return one of: None; an array, a list, or a sparse matrix of length D_chunk.shape[0]; or a tuple of such objects. Returning None is useful for in-place operations, rather than reductions. If None, pairwise_distances_chunked returns a generator of vertical chunks of the distance matrix. metricstr or callable, default=’euclidean’ The metric to use when calculating distance between instances in a feature array. If metric is a string, it must be one of the options allowed by scipy.spatial.distance.pdist for its metric parameter, or a metric listed in pairwise.PAIRWISE_DISTANCE_FUNCTIONS. If metric is “precomputed”, X is assumed to be a distance matrix. Alternatively, if metric is a callable function, it is called on each pair of instances (rows) and the resulting value recorded. The callable should take two arrays from X as input and return a value indicating the distance between them. n_jobsint, default=None The number of jobs to use for the computation. This works by breaking down the pairwise matrix into n_jobs even slices and computing them in parallel. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. working_memoryint, default=None The sought maximum memory for temporary distance matrix chunks. When None (default), the value of sklearn.get_config()['working_memory'] is used. `**kwds`optional keyword parameters Any further parameters are passed directly to the distance function. If using a scipy.spatial.distance metric, the parameters are still metric dependent. See the scipy docs for usage examples. Yields D_chunk{ndarray, sparse matrix} A contiguous slice of distance matrix, optionally processed by reduce_func. Examples Without reduce_func: >>> import numpy as np >>> from sklearn.metrics import pairwise_distances_chunked >>> X = np.random.RandomState(0).rand(5, 3) >>> D_chunk = next(pairwise_distances_chunked(X)) >>> D_chunk array([[0. ..., 0.29..., 0.41..., 0.19..., 0.57...], [0.29..., 0. ..., 0.57..., 0.41..., 0.76...], [0.41..., 0.57..., 0. ..., 0.44..., 0.90...], [0.19..., 0.41..., 0.44..., 0. ..., 0.51...], [0.57..., 0.76..., 0.90..., 0.51..., 0. ...]]) Retrieve all neighbors and average distance within radius r: >>> r = .2 >>> def reduce_func(D_chunk, start): ... neigh = [np.flatnonzero(d < r) for d in D_chunk] ... 
avg_dist = (D_chunk * (D_chunk < r)).mean(axis=1) ... return neigh, avg_dist >>> gen = pairwise_distances_chunked(X, reduce_func=reduce_func) >>> neigh, avg_dist = next(gen) >>> neigh [array([0, 3]), array([1]), array([2]), array([0, 3]), array([4])] >>> avg_dist array([0.039..., 0. , 0. , 0.039..., 0. ]) Where r is defined per sample, we need to make use of start: >>> r = [.2, .4, .4, .3, .1] >>> def reduce_func(D_chunk, start): ... neigh = [np.flatnonzero(d < r[i]) ... for i, d in enumerate(D_chunk, start)] ... return neigh >>> neigh = next(pairwise_distances_chunked(X, reduce_func=reduce_func)) >>> neigh [array([0, 3]), array([0, 1]), array([2]), array([0, 3]), array([4])] Force row-by-row generation by reducing working_memory: >>> gen = pairwise_distances_chunked(X, reduce_func=reduce_func, ... working_memory=0) >>> next(gen) [array([0, 3])] >>> next(gen) [array([0, 1])]
sklearn.modules.generated.sklearn.metrics.pairwise_distances_chunked#sklearn.metrics.pairwise_distances_chunked
sklearn.metrics.plot_confusion_matrix(estimator, X, y_true, *, labels=None, sample_weight=None, normalize=None, display_labels=None, include_values=True, xticks_rotation='horizontal', values_format=None, cmap='viridis', ax=None, colorbar=True) [source] Plot Confusion Matrix. Read more in the User Guide. Parameters estimatorestimator instance Fitted classifier or a fitted Pipeline in which the last estimator is a classifier. X{array-like, sparse matrix} of shape (n_samples, n_features) Input values. y_truearray-like of shape (n_samples,) Target values. labelsarray-like of shape (n_classes,), default=None List of labels to index the matrix. This may be used to reorder or select a subset of labels. If None is given, those that appear at least once in y_true or y_pred are used in sorted order. sample_weightarray-like of shape (n_samples,), default=None Sample weights. normalize{‘true’, ‘pred’, ‘all’}, default=None Normalizes confusion matrix over the true (rows), predicted (columns) conditions or all the population. If None, confusion matrix will not be normalized. display_labelsarray-like of shape (n_classes,), default=None Target names used for plotting. By default, labels will be used if it is defined, otherwise the unique labels of y_true and y_pred will be used. include_valuesbool, default=True Includes values in confusion matrix. xticks_rotation{‘vertical’, ‘horizontal’} or float, default=’horizontal’ Rotation of xtick labels. values_formatstr, default=None Format specification for values in confusion matrix. If None, the format specification is ‘d’ or ‘.2g’ whichever is shorter. cmapstr or matplotlib Colormap, default=’viridis’ Colormap recognized by matplotlib. axmatplotlib Axes, default=None Axes object to plot on. If None, a new figure and axes is created. colorbarbool, default=True Whether or not to add a colorbar to the plot. New in version 0.24. Returns displayConfusionMatrixDisplay See also confusion_matrix Compute Confusion Matrix to evaluate the accuracy of a classification. ConfusionMatrixDisplay Confusion Matrix visualization. Examples >>> import matplotlib.pyplot as plt >>> from sklearn.datasets import make_classification >>> from sklearn.metrics import plot_confusion_matrix >>> from sklearn.model_selection import train_test_split >>> from sklearn.svm import SVC >>> X, y = make_classification(random_state=0) >>> X_train, X_test, y_train, y_test = train_test_split( ... X, y, random_state=0) >>> clf = SVC(random_state=0) >>> clf.fit(X_train, y_train) SVC(random_state=0) >>> plot_confusion_matrix(clf, X_test, y_test) >>> plt.show()
sklearn.modules.generated.sklearn.metrics.plot_confusion_matrix#sklearn.metrics.plot_confusion_matrix
sklearn.metrics.plot_det_curve(estimator, X, y, *, sample_weight=None, response_method='auto', name=None, ax=None, pos_label=None, **kwargs) [source] Plot detection error tradeoff (DET) curve. Extra keyword arguments will be passed to matplotlib’s plot. Read more in the User Guide. New in version 0.24. Parameters estimatorestimator instance Fitted classifier or a fitted Pipeline in which the last estimator is a classifier. X{array-like, sparse matrix} of shape (n_samples, n_features) Input values. yarray-like of shape (n_samples,) Target values. sample_weightarray-like of shape (n_samples,), default=None Sample weights. response_method{‘predict_proba’, ‘decision_function’, ‘auto’} default=’auto’ Specifies whether to use predict_proba or decision_function as the predicted target response. If set to ‘auto’, predict_proba is tried first and if it does not exist decision_function is tried next. namestr, default=None Name of DET curve for labeling. If None, use the name of the estimator. axmatplotlib axes, default=None Axes object to plot on. If None, a new figure and axes is created. pos_labelstr or int, default=None The label of the positive class. When pos_label=None, if y_true is in {-1, 1} or {0, 1}, pos_label is set to 1, otherwise an error will be raised. Returns displayDetCurveDisplay Object that stores computed values. See also det_curve Compute error rates for different probability thresholds. DetCurveDisplay DET curve visualization. plot_roc_curve Plot Receiver operating characteristic (ROC) curve. Examples >>> import matplotlib.pyplot as plt >>> from sklearn import datasets, metrics, model_selection, svm >>> X, y = datasets.make_classification(random_state=0) >>> X_train, X_test, y_train, y_test = model_selection.train_test_split( ... X, y, random_state=0) >>> clf = svm.SVC(random_state=0) >>> clf.fit(X_train, y_train) SVC(random_state=0) >>> metrics.plot_det_curve(clf, X_test, y_test) >>> plt.show()
sklearn.modules.generated.sklearn.metrics.plot_det_curve#sklearn.metrics.plot_det_curve
sklearn.metrics.plot_precision_recall_curve(estimator, X, y, *, sample_weight=None, response_method='auto', name=None, ax=None, pos_label=None, **kwargs) [source] Plot Precision Recall Curve for binary classifiers. Extra keyword arguments will be passed to matplotlib’s plot. Read more in the User Guide. Parameters estimatorestimator instance Fitted classifier or a fitted Pipeline in which the last estimator is a classifier. X{array-like, sparse matrix} of shape (n_samples, n_features) Input values. yarray-like of shape (n_samples,) Binary target values. sample_weightarray-like of shape (n_samples,), default=None Sample weights. response_method{‘predict_proba’, ‘decision_function’, ‘auto’}, default=’auto’ Specifies whether to use predict_proba or decision_function as the target response. If set to ‘auto’, predict_proba is tried first and if it does not exist decision_function is tried next. namestr, default=None Name for labeling curve. If None, the name of the estimator is used. axmatplotlib axes, default=None Axes object to plot on. If None, a new figure and axes is created. pos_labelstr or int, default=None The class considered as the positive class when computing the precision and recall metrics. By default, estimators.classes_[1] is considered as the positive class. New in version 0.24. **kwargsdict Keyword arguments to be passed to matplotlib’s plot. Returns displayPrecisionRecallDisplay Object that stores computed values. See also precision_recall_curve Compute precision-recall pairs for different probability thresholds. PrecisionRecallDisplay Precision Recall visualization.
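Examples A minimal usage sketch, mirroring the plot_det_curve and plot_roc_curve examples elsewhere in this reference:
>>> import matplotlib.pyplot as plt
>>> from sklearn import datasets, metrics, model_selection, svm
>>> X, y = datasets.make_classification(random_state=0)
>>> X_train, X_test, y_train, y_test = model_selection.train_test_split(
...     X, y, random_state=0)
>>> clf = svm.SVC(random_state=0)
>>> clf.fit(X_train, y_train)
SVC(random_state=0)
>>> metrics.plot_precision_recall_curve(clf, X_test, y_test)
>>> plt.show()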
sklearn.modules.generated.sklearn.metrics.plot_precision_recall_curve#sklearn.metrics.plot_precision_recall_curve
sklearn.metrics.plot_roc_curve(estimator, X, y, *, sample_weight=None, drop_intermediate=True, response_method='auto', name=None, ax=None, pos_label=None, **kwargs) [source] Plot Receiver operating characteristic (ROC) curve. Extra keyword arguments will be passed to matplotlib’s plot. Read more in the User Guide. Parameters estimatorestimator instance Fitted classifier or a fitted Pipeline in which the last estimator is a classifier. X{array-like, sparse matrix} of shape (n_samples, n_features) Input values. yarray-like of shape (n_samples,) Target values. sample_weightarray-like of shape (n_samples,), default=None Sample weights. drop_intermediateboolean, default=True Whether to drop some suboptimal thresholds which would not appear on a plotted ROC curve. This is useful in order to create lighter ROC curves. response_method{‘predict_proba’, ‘decision_function’, ‘auto’} default=’auto’ Specifies whether to use predict_proba or decision_function as the target response. If set to ‘auto’, predict_proba is tried first and if it does not exist decision_function is tried next. namestr, default=None Name of ROC Curve for labeling. If None, use the name of the estimator. axmatplotlib axes, default=None Axes object to plot on. If None, a new figure and axes is created. pos_labelstr or int, default=None The class considered as the positive class when computing the roc auc metrics. By default, estimators.classes_[1] is considered as the positive class. New in version 0.24. Returns displayRocCurveDisplay Object that stores computed values. See also roc_curve Compute Receiver operating characteristic (ROC) curve. RocCurveDisplay ROC Curve visualization. roc_auc_score Compute the area under the ROC curve. Examples >>> import matplotlib.pyplot as plt >>> from sklearn import datasets, metrics, model_selection, svm >>> X, y = datasets.make_classification(random_state=0) >>> X_train, X_test, y_train, y_test = model_selection.train_test_split( ... X, y, random_state=0) >>> clf = svm.SVC(random_state=0) >>> clf.fit(X_train, y_train) SVC(random_state=0) >>> metrics.plot_roc_curve(clf, X_test, y_test) >>> plt.show()
sklearn.modules.generated.sklearn.metrics.plot_roc_curve#sklearn.metrics.plot_roc_curve
class sklearn.metrics.PrecisionRecallDisplay(precision, recall, *, average_precision=None, estimator_name=None, pos_label=None) [source] Precision Recall visualization. It is recommended to use plot_precision_recall_curve to create a visualizer. All parameters are stored as attributes. Read more in the User Guide. Parameters precisionndarray Precision values. recallndarray Recall values. average_precisionfloat, default=None Average precision. If None, the average precision is not shown. estimator_namestr, default=None Name of estimator. If None, then the estimator name is not shown. pos_labelstr or int, default=None The class considered as the positive class. If None, the class will not be shown in the legend. New in version 0.24. Attributes line_matplotlib Artist Precision recall curve. ax_matplotlib Axes Axes with precision recall curve. figure_matplotlib Figure Figure containing the curve. See also precision_recall_curve Compute precision-recall pairs for different probability thresholds. plot_precision_recall_curve Plot Precision Recall Curve for binary classifiers. Examples >>> from sklearn.datasets import make_classification >>> from sklearn.metrics import (precision_recall_curve, ... PrecisionRecallDisplay) >>> from sklearn.model_selection import train_test_split >>> from sklearn.svm import SVC >>> X, y = make_classification(random_state=0) >>> X_train, X_test, y_train, y_test = train_test_split(X, y, ... random_state=0) >>> clf = SVC(random_state=0) >>> clf.fit(X_train, y_train) SVC(random_state=0) >>> predictions = clf.predict(X_test) >>> precision, recall, _ = precision_recall_curve(y_test, predictions) >>> disp = PrecisionRecallDisplay(precision=precision, recall=recall) >>> disp.plot() Methods plot([ax, name]) Plot visualization. plot(ax=None, *, name=None, **kwargs) [source] Plot visualization. Extra keyword arguments will be passed to matplotlib’s plot. Parameters axMatplotlib Axes, default=None Axes object to plot on. If None, a new figure and axes is created. namestr, default=None Name of precision recall curve for labeling. If None, use the name of the estimator. **kwargsdict Keyword arguments to be passed to matplotlib’s plot. Returns displayPrecisionRecallDisplay Object that stores computed values.
sklearn.modules.generated.sklearn.metrics.precisionrecalldisplay#sklearn.metrics.PrecisionRecallDisplay
sklearn.metrics.precision_recall_curve(y_true, probas_pred, *, pos_label=None, sample_weight=None) [source] Compute precision-recall pairs for different probability thresholds. Note: this implementation is restricted to the binary classification task. The precision is the ratio tp / (tp + fp) where tp is the number of true positives and fp the number of false positives. The precision is intuitively the ability of the classifier not to label as positive a sample that is negative. The recall is the ratio tp / (tp + fn) where tp is the number of true positives and fn the number of false negatives. The recall is intuitively the ability of the classifier to find all the positive samples. The last precision and recall values are 1. and 0. respectively and do not have a corresponding threshold. This ensures that the graph starts on the y axis. Read more in the User Guide. Parameters y_truendarray of shape (n_samples,) True binary labels. If labels are not either {-1, 1} or {0, 1}, then pos_label should be explicitly given. probas_predndarray of shape (n_samples,) Estimated probabilities or output of a decision function. pos_labelint or str, default=None The label of the positive class. When pos_label=None, if y_true is in {-1, 1} or {0, 1}, pos_label is set to 1, otherwise an error will be raised. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns precisionndarray of shape (n_thresholds + 1,) Precision values such that element i is the precision of predictions with score >= thresholds[i] and the last element is 1. recallndarray of shape (n_thresholds + 1,) Decreasing recall values such that element i is the recall of predictions with score >= thresholds[i] and the last element is 0. thresholdsndarray of shape (n_thresholds,) Increasing thresholds on the decision function used to compute precision and recall. n_thresholds <= len(np.unique(probas_pred)). See also plot_precision_recall_curve Plot Precision Recall Curve for binary classifiers. PrecisionRecallDisplay Precision Recall visualization. average_precision_score Compute average precision from prediction scores. det_curve Compute error rates for different probability thresholds. roc_curve Compute Receiver operating characteristic (ROC) curve. Examples >>> import numpy as np >>> from sklearn.metrics import precision_recall_curve >>> y_true = np.array([0, 0, 1, 1]) >>> y_scores = np.array([0.1, 0.4, 0.35, 0.8]) >>> precision, recall, thresholds = precision_recall_curve( ... y_true, y_scores) >>> precision array([0.66666667, 0.5 , 1. , 1. ]) >>> recall array([1. , 0.5, 0.5, 0. ]) >>> thresholds array([0.35, 0.4 , 0.8 ])
sklearn.modules.generated.sklearn.metrics.precision_recall_curve#sklearn.metrics.precision_recall_curve
sklearn.metrics.precision_recall_fscore_support(y_true, y_pred, *, beta=1.0, labels=None, pos_label=1, average=None, warn_for=('precision', 'recall', 'f-score'), sample_weight=None, zero_division='warn') [source] Compute precision, recall, F-measure and support for each class. The precision is the ratio tp / (tp + fp) where tp is the number of true positives and fp the number of false positives. The precision is intuitively the ability of the classifier not to label as positive a sample that is negative. The recall is the ratio tp / (tp + fn) where tp is the number of true positives and fn the number of false negatives. The recall is intuitively the ability of the classifier to find all the positive samples. The F-beta score can be interpreted as a weighted harmonic mean of the precision and recall, where an F-beta score reaches its best value at 1 and worst score at 0. The F-beta score weights recall more than precision by a factor of beta. beta == 1.0 means recall and precision are equally important. The support is the number of occurrences of each class in y_true. If pos_label is None and in binary classification, this function returns the average precision, recall and F-measure if average is one of 'micro', 'macro', 'weighted' or 'samples'. Read more in the User Guide. Parameters y_true1d array-like, or label indicator array / sparse matrix Ground truth (correct) target values. y_pred1d array-like, or label indicator array / sparse matrix Estimated targets as returned by a classifier. betafloat, default=1.0 The strength of recall versus precision in the F-score. labelsarray-like, default=None The set of labels to include when average != 'binary', and their order if average is None. Labels present in the data can be excluded, for example to calculate a multiclass average ignoring a majority negative class, while labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in y_true and y_pred are used in sorted order. pos_labelstr or int, default=1 The class to report if average='binary' and the data is binary. If the data are multiclass or multilabel, this will be ignored; setting labels=[pos_label] and average != 'binary' will report scores for that label only. average{‘binary’, ‘micro’, ‘macro’, ‘samples’, ‘weighted’}, default=None If None, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data: 'binary': Only report results for the class specified by pos_label. This is applicable only if targets (y_{true,pred}) are binary. 'micro': Calculate metrics globally by counting the total true positives, false negatives and false positives. 'macro': Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account. 'weighted': Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters ‘macro’ to account for label imbalance; it can result in an F-score that is not between precision and recall. 'samples': Calculate metrics for each instance, and find their average (only meaningful for multilabel classification where this differs from accuracy_score). warn_fortuple or set, for internal use This determines which warnings will be made in the case that this function is being used to return only one of its metrics. sample_weightarray-like of shape (n_samples,), default=None Sample weights.
zero_division“warn”, 0 or 1, default=”warn” Sets the value to return when there is a zero division: recall: when there are no positive labels precision: when there are no positive predictions f-score: both If set to “warn”, this acts as 0, but warnings are also raised. Returns precisionfloat (if average is not None) or array of float, shape = [n_unique_labels] recallfloat (if average is not None) or array of float, shape = [n_unique_labels] fbeta_scorefloat (if average is not None) or array of float, shape = [n_unique_labels] supportNone (if average is not None) or array of int, shape = [n_unique_labels] The number of occurrences of each label in y_true. Notes When true positive + false positive == 0, precision is undefined. When true positive + false negative == 0, recall is undefined. In such cases, by default the metric will be set to 0, as will f-score, and UndefinedMetricWarning will be raised. This behavior can be modified with zero_division. References 1 Wikipedia entry for the Precision and recall. 2 Wikipedia entry for the F1-score. 3 Discriminative Methods for Multi-labeled Classification Advances in Knowledge Discovery and Data Mining (2004), pp. 22-30 by Shantanu Godbole, Sunita Sarawagi. Examples >>> import numpy as np >>> from sklearn.metrics import precision_recall_fscore_support >>> y_true = np.array(['cat', 'dog', 'pig', 'cat', 'dog', 'pig']) >>> y_pred = np.array(['cat', 'pig', 'dog', 'cat', 'cat', 'dog']) >>> precision_recall_fscore_support(y_true, y_pred, average='macro') (0.22..., 0.33..., 0.26..., None) >>> precision_recall_fscore_support(y_true, y_pred, average='micro') (0.33..., 0.33..., 0.33..., None) >>> precision_recall_fscore_support(y_true, y_pred, average='weighted') (0.22..., 0.33..., 0.26..., None) It is possible to compute per-label precisions, recalls, F1-scores and supports instead of averaging: >>> precision_recall_fscore_support(y_true, y_pred, average=None, ... labels=['pig', 'dog', 'cat']) (array([0. , 0. , 0.66...]), array([0., 0., 1.]), array([0. , 0. , 0.8]), array([2, 2, 2]))
sklearn.modules.generated.sklearn.metrics.precision_recall_fscore_support#sklearn.metrics.precision_recall_fscore_support
sklearn.metrics.precision_score(y_true, y_pred, *, labels=None, pos_label=1, average='binary', sample_weight=None, zero_division='warn') [source] Compute the precision. The precision is the ratio tp / (tp + fp) where tp is the number of true positives and fp the number of false positives. The precision is intuitively the ability of the classifier not to label as positive a sample that is negative. The best value is 1 and the worst value is 0. Read more in the User Guide. Parameters y_true1d array-like, or label indicator array / sparse matrix Ground truth (correct) target values. y_pred1d array-like, or label indicator array / sparse matrix Estimated targets as returned by a classifier. labelsarray-like, default=None The set of labels to include when average != 'binary', and their order if average is None. Labels present in the data can be excluded, for example to calculate a multiclass average ignoring a majority negative class, while labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in y_true and y_pred are used in sorted order. Changed in version 0.17: Parameter labels improved for multiclass problem. pos_labelstr or int, default=1 The class to report if average='binary' and the data is binary. If the data are multiclass or multilabel, this will be ignored; setting labels=[pos_label] and average != 'binary' will report scores for that label only. average{‘micro’, ‘macro’, ‘samples’, ‘weighted’, ‘binary’} default=’binary’ This parameter is required for multiclass/multilabel targets. If None, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data: 'binary': Only report results for the class specified by pos_label. This is applicable only if targets (y_{true,pred}) are binary. 'micro': Calculate metrics globally by counting the total true positives, false negatives and false positives. 'macro': Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account. 'weighted': Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters ‘macro’ to account for label imbalance; it can result in an F-score that is not between precision and recall. 'samples': Calculate metrics for each instance, and find their average (only meaningful for multilabel classification where this differs from accuracy_score). sample_weightarray-like of shape (n_samples,), default=None Sample weights. zero_division“warn”, 0 or 1, default=”warn” Sets the value to return when there is a zero division. If set to “warn”, this acts as 0, but warnings are also raised. Returns precisionfloat (if average is not None) or array of float of shape (n_unique_labels,) Precision of the positive class in binary classification or weighted average of the precision of each class for the multiclass task. See also precision_recall_fscore_support, multilabel_confusion_matrix Notes When true positive + false positive == 0, precision returns 0 and raises UndefinedMetricWarning. This behavior can be modified with zero_division. Examples >>> from sklearn.metrics import precision_score >>> y_true = [0, 1, 2, 0, 1, 2] >>> y_pred = [0, 2, 1, 0, 0, 1] >>> precision_score(y_true, y_pred, average='macro') 0.22... >>> precision_score(y_true, y_pred, average='micro') 0.33... >>> precision_score(y_true, y_pred, average='weighted') 0.22... 
>>> precision_score(y_true, y_pred, average=None) array([0.66..., 0. , 0. ]) >>> y_pred = [0, 0, 0, 0, 0, 0] >>> precision_score(y_true, y_pred, average=None) array([0.33..., 0. , 0. ]) >>> precision_score(y_true, y_pred, average=None, zero_division=1) array([0.33..., 1. , 1. ])
sklearn.modules.generated.sklearn.metrics.precision_score#sklearn.metrics.precision_score
sklearn.metrics.r2_score(y_true, y_pred, *, sample_weight=None, multioutput='uniform_average') [source] R^2 (coefficient of determination) regression score function. Best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an R^2 score of 0.0. Read more in the User Guide. Parameters y_truearray-like of shape (n_samples,) or (n_samples, n_outputs) Ground truth (correct) target values. y_predarray-like of shape (n_samples,) or (n_samples, n_outputs) Estimated target values. sample_weightarray-like of shape (n_samples,), default=None Sample weights. multioutput{‘raw_values’, ‘uniform_average’, ‘variance_weighted’}, array-like of shape (n_outputs,) or None, default=’uniform_average’ Defines aggregating of multiple output scores. Array-like value defines weights used to average scores. Default is “uniform_average”. ‘raw_values’ : Returns a full set of scores in case of multioutput input. ‘uniform_average’ : Scores of all outputs are averaged with uniform weight. ‘variance_weighted’ : Scores of all outputs are averaged, weighted by the variances of each individual output. Changed in version 0.19: Default value of multioutput is ‘uniform_average’. Returns zfloat or ndarray of floats The R^2 score or ndarray of scores if ‘multioutput’ is ‘raw_values’. Notes This is not a symmetric function. Unlike most other scores, R^2 score may be negative (it need not actually be the square of a quantity R). This metric is not well-defined for single samples and will return a NaN value if n_samples is less than two. References 1 Wikipedia entry on the Coefficient of determination Examples >>> from sklearn.metrics import r2_score >>> y_true = [3, -0.5, 2, 7] >>> y_pred = [2.5, 0.0, 2, 8] >>> r2_score(y_true, y_pred) 0.948... >>> y_true = [[0.5, 1], [-1, 1], [7, -6]] >>> y_pred = [[0, 2], [-1, 2], [8, -5]] >>> r2_score(y_true, y_pred, ... multioutput='variance_weighted') 0.938... >>> y_true = [1, 2, 3] >>> y_pred = [1, 2, 3] >>> r2_score(y_true, y_pred) 1.0 >>> y_true = [1, 2, 3] >>> y_pred = [2, 2, 2] >>> r2_score(y_true, y_pred) 0.0 >>> y_true = [1, 2, 3] >>> y_pred = [3, 2, 1] >>> r2_score(y_true, y_pred) -3.0
sklearn.modules.generated.sklearn.metrics.r2_score#sklearn.metrics.r2_score
sklearn.metrics.rand_score(labels_true, labels_pred) [source] Rand index. The Rand Index computes a similarity measure between two clusterings by considering all pairs of samples and counting pairs that are assigned in the same or different clusters in the predicted and true clusterings. The raw RI score is: RI = (number of agreeing pairs) / (number of pairs) Read more in the User Guide. Parameters labels_truearray-like of shape (n_samples,), dtype=integral Ground truth class labels to be used as a reference. labels_predarray-like of shape (n_samples,), dtype=integral Cluster labels to evaluate. Returns RIfloat Similarity score between 0.0 and 1.0, inclusive; 1.0 stands for a perfect match. See also adjusted_rand_score Adjusted Rand Score adjusted_mutual_info_score Adjusted Mutual Information Examples Perfectly matching labelings have a score of 1, even after permuting the labels: >>> from sklearn.metrics.cluster import rand_score >>> rand_score([0, 0, 1, 1], [1, 1, 0, 0]) 1.0 Labelings that assign all class members to the same clusters are complete but may not always be pure, hence penalized: >>> rand_score([0, 0, 1, 2], [0, 0, 1, 1]) 0.83...
sklearn.modules.generated.sklearn.metrics.rand_score#sklearn.metrics.rand_score
sklearn.metrics.recall_score(y_true, y_pred, *, labels=None, pos_label=1, average='binary', sample_weight=None, zero_division='warn') [source] Compute the recall. The recall is the ratio tp / (tp + fn) where tp is the number of true positives and fn the number of false negatives. The recall is intuitively the ability of the classifier to find all the positive samples. The best value is 1 and the worst value is 0. Read more in the User Guide. Parameters y_true1d array-like, or label indicator array / sparse matrix Ground truth (correct) target values. y_pred1d array-like, or label indicator array / sparse matrix Estimated targets as returned by a classifier. labelsarray-like, default=None The set of labels to include when average != 'binary', and their order if average is None. Labels present in the data can be excluded, for example to calculate a multiclass average ignoring a majority negative class, while labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in y_true and y_pred are used in sorted order. Changed in version 0.17: Parameter labels improved for multiclass problem. pos_labelstr or int, default=1 The class to report if average='binary' and the data is binary. If the data are multiclass or multilabel, this will be ignored; setting labels=[pos_label] and average != 'binary' will report scores for that label only. average{‘micro’, ‘macro’, ‘samples’, ‘weighted’, ‘binary’} default=’binary’ This parameter is required for multiclass/multilabel targets. If None, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data: 'binary': Only report results for the class specified by pos_label. This is applicable only if targets (y_{true,pred}) are binary. 'micro': Calculate metrics globally by counting the total true positives, false negatives and false positives. 'macro': Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account. 'weighted': Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters ‘macro’ to account for label imbalance; it can result in an F-score that is not between precision and recall. 'samples': Calculate metrics for each instance, and find their average (only meaningful for multilabel classification where this differs from accuracy_score). sample_weightarray-like of shape (n_samples,), default=None Sample weights. zero_division“warn”, 0 or 1, default=”warn” Sets the value to return when there is a zero division. If set to “warn”, this acts as 0, but warnings are also raised. Returns recallfloat (if average is not None) or array of float of shape (n_unique_labels,) Recall of the positive class in binary classification or weighted average of the recall of each class for the multiclass task. See also precision_recall_fscore_support, balanced_accuracy_score multilabel_confusion_matrix Notes When true positive + false negative == 0, recall returns 0 and raises UndefinedMetricWarning. This behavior can be modified with zero_division. Examples >>> from sklearn.metrics import recall_score >>> y_true = [0, 1, 2, 0, 1, 2] >>> y_pred = [0, 2, 1, 0, 0, 1] >>> recall_score(y_true, y_pred, average='macro') 0.33... >>> recall_score(y_true, y_pred, average='micro') 0.33... >>> recall_score(y_true, y_pred, average='weighted') 0.33... 
>>> recall_score(y_true, y_pred, average=None) array([1., 0., 0.]) >>> y_true = [0, 0, 0, 0, 0, 0] >>> recall_score(y_true, y_pred, average=None) array([0.5, 0. , 0. ]) >>> recall_score(y_true, y_pred, average=None, zero_division=1) array([0.5, 1. , 1. ])
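The examples above only cover the multiclass case. As a supplementary sketch (not part of the original docstring), the multilabel case with binary label indicators, where each per-label value follows directly from tp / (tp + fn):
>>> import numpy as np
>>> y_true = np.array([[0, 0, 0], [1, 1, 1], [0, 1, 1]])
>>> y_pred = np.array([[0, 0, 0], [1, 1, 1], [1, 1, 0]])
>>> recall_score(y_true, y_pred, average=None)
array([1. , 1. , 0.5])
>>> recall_score(y_true, y_pred, average='macro')
0.83...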
sklearn.modules.generated.sklearn.metrics.recall_score#sklearn.metrics.recall_score
class sklearn.metrics.RocCurveDisplay(*, fpr, tpr, roc_auc=None, estimator_name=None, pos_label=None) [source] ROC Curve visualization. It is recommended to use plot_roc_curve to create a visualizer. All parameters are stored as attributes. Read more in the User Guide. Parameters fprndarray False positive rate. tprndarray True positive rate. roc_aucfloat, default=None Area under ROC curve. If None, the roc_auc score is not shown. estimator_namestr, default=None Name of estimator. If None, the estimator name is not shown. pos_labelstr or int, default=None The class considered as the positive class when computing the roc auc metrics. By default, estimator.classes_[1] is considered as the positive class. New in version 0.24. Attributes line_matplotlib Artist ROC Curve. ax_matplotlib Axes Axes with ROC Curve. figure_matplotlib Figure Figure containing the curve. See also roc_curve Compute Receiver operating characteristic (ROC) curve. plot_roc_curve Plot Receiver operating characteristic (ROC) curve. roc_auc_score Compute the area under the ROC curve. Examples >>> import matplotlib.pyplot as plt >>> import numpy as np >>> from sklearn import metrics >>> y = np.array([0, 0, 1, 1]) >>> pred = np.array([0.1, 0.4, 0.35, 0.8]) >>> fpr, tpr, thresholds = metrics.roc_curve(y, pred) >>> roc_auc = metrics.auc(fpr, tpr) >>> display = metrics.RocCurveDisplay(fpr=fpr, tpr=tpr, roc_auc=roc_auc, estimator_name='example estimator') >>> display.plot() >>> plt.show() Methods plot([ax, name]) Plot visualization. plot(ax=None, *, name=None, **kwargs) [source] Plot visualization. Extra keyword arguments will be passed to matplotlib’s plot. Parameters axmatplotlib axes, default=None Axes object to plot on. If None, a new figure and axes is created. namestr, default=None Name of ROC Curve for labeling. If None, use the name of the estimator. Returns displayRocCurveDisplay Object that stores computed values.
sklearn.modules.generated.sklearn.metrics.roccurvedisplay#sklearn.metrics.RocCurveDisplay
sklearn.metrics.roc_auc_score(y_true, y_score, *, average='macro', sample_weight=None, max_fpr=None, multi_class='raise', labels=None) [source] Compute Area Under the Receiver Operating Characteristic Curve (ROC AUC) from prediction scores. Note: this implementation can be used with binary, multiclass and multilabel classification, but some restrictions apply (see Parameters). Read more in the User Guide. Parameters y_truearray-like of shape (n_samples,) or (n_samples, n_classes) True labels or binary label indicators. The binary and multiclass cases expect labels with shape (n_samples,) while the multilabel case expects binary label indicators with shape (n_samples, n_classes). y_scorearray-like of shape (n_samples,) or (n_samples, n_classes) Target scores. In the binary case, it corresponds to an array of shape (n_samples,). Both probability estimates and non-thresholded decision values can be provided. The probability estimates correspond to the probability of the class with the greater label, i.e. estimator.classes_[1] and thus estimator.predict_proba(X)[:, 1]. The decision values correspond to the output of estimator.decision_function(X). See more information in the User guide; In the multiclass case, it corresponds to an array of shape (n_samples, n_classes) of probability estimates provided by the predict_proba method. The probability estimates must sum to 1 across the possible classes. In addition, the order of the class scores must correspond to the order of labels, if provided, or else to the numerical or lexicographical order of the labels in y_true. See more information in the User guide; In the multilabel case, it corresponds to an array of shape (n_samples, n_classes). Probability estimates are provided by the predict_proba method and the non-thresholded decision values by the decision_function method. The probability estimates correspond to the probability of the class with the greater label for each output of the classifier. See more information in the User guide. average{‘micro’, ‘macro’, ‘samples’, ‘weighted’} or None, default=’macro’ If None, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data: Note: multiclass ROC AUC currently only handles the ‘macro’ and ‘weighted’ averages. 'micro': Calculate metrics globally by considering each element of the label indicator matrix as a label. 'macro': Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account. 'weighted': Calculate metrics for each label, and find their average, weighted by support (the number of true instances for each label). 'samples': Calculate metrics for each instance, and find their average. Will be ignored when y_true is binary. sample_weightarray-like of shape (n_samples,), default=None Sample weights. max_fprfloat > 0 and <= 1, default=None If not None, the standardized partial AUC [2] over the range [0, max_fpr] is returned. For the multiclass case, max_fpr should be either None or 1.0, as partial AUC computation is currently not supported for multiclass. multi_class{‘raise’, ‘ovr’, ‘ovo’}, default=’raise’ Only used for multiclass targets. Determines the type of configuration to use. The default value raises an error, so either 'ovr' or 'ovo' must be passed explicitly. 'ovr': Stands for One-vs-rest. Computes the AUC of each class against the rest [3] [4]. This treats the multiclass case in the same way as the multilabel case. 
Sensitive to class imbalance even when average == 'macro', because class imbalance affects the composition of each of the ‘rest’ groupings. 'ovo': Stands for One-vs-one. Computes the average AUC of all possible pairwise combinations of classes [5]. Insensitive to class imbalance when average == 'macro'. labelsarray-like of shape (n_classes,), default=None Only used for multiclass targets. List of labels that index the classes in y_score. If None, the numerical or lexicographical order of the labels in y_true is used. Returns aucfloat See also average_precision_score Area under the precision-recall curve. roc_curve Compute Receiver operating characteristic (ROC) curve. plot_roc_curve Plot Receiver operating characteristic (ROC) curve. References 1 Wikipedia entry for the Receiver operating characteristic 2 Analyzing a portion of the ROC curve. McClish, 1989 3 Provost, F., Domingos, P. (2000). Well-trained PETs: Improving probability estimation trees (Section 6.2), CeDER Working Paper #IS-00-04, Stern School of Business, New York University. 4 Fawcett, T. (2006). An introduction to ROC analysis. Pattern Recognition Letters, 27(8), 861-874. 5 Hand, D.J., Till, R.J. (2001). A Simple Generalisation of the Area Under the ROC Curve for Multiple Class Classification Problems. Machine Learning, 45(2), 171-186. Examples Binary case: >>> from sklearn.datasets import load_breast_cancer >>> from sklearn.linear_model import LogisticRegression >>> from sklearn.metrics import roc_auc_score >>> X, y = load_breast_cancer(return_X_y=True) >>> clf = LogisticRegression(solver="liblinear", random_state=0).fit(X, y) >>> roc_auc_score(y, clf.predict_proba(X)[:, 1]) 0.99... >>> roc_auc_score(y, clf.decision_function(X)) 0.99... Multiclass case: >>> from sklearn.datasets import load_iris >>> X, y = load_iris(return_X_y=True) >>> clf = LogisticRegression(solver="liblinear").fit(X, y) >>> roc_auc_score(y, clf.predict_proba(X), multi_class='ovr') 0.99... Multilabel case: >>> import numpy as np >>> from sklearn.datasets import make_multilabel_classification >>> from sklearn.multioutput import MultiOutputClassifier >>> X, y = make_multilabel_classification(random_state=0) >>> clf = MultiOutputClassifier(clf).fit(X, y) >>> # get a list of n_output containing probability arrays of shape >>> # (n_samples, n_classes) >>> y_pred = clf.predict_proba(X) >>> # extract the positive columns for each output >>> y_pred = np.transpose([pred[:, 1] for pred in y_pred]) >>> roc_auc_score(y, y_pred, average=None) array([0.82..., 0.86..., 0.94..., 0.85..., 0.94...]) >>> from sklearn.linear_model import RidgeClassifierCV >>> clf = RidgeClassifierCV().fit(X, y) >>> roc_auc_score(y, clf.decision_function(X), average=None) array([0.81..., 0.84..., 0.93..., 0.87..., 0.94...])
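The max_fpr parameter is not exercised in the examples above; the following is a minimal, hand-checkable sketch (not part of the original docstring) of the standardized partial AUC on a toy problem:
>>> import numpy as np
>>> from sklearn.metrics import roc_auc_score
>>> y = np.array([0, 0, 1, 1])
>>> scores = np.array([0.1, 0.4, 0.35, 0.8])
>>> roc_auc_score(y, scores)  # full AUC
0.75
>>> # raw partial AUC over [0, 0.5] is 0.25; the McClish standardization
>>> # rescales it to 0.5 * (1 + (0.25 - 0.125) / (0.5 - 0.125)) = 2/3
>>> roc_auc_score(y, scores, max_fpr=0.5)
0.66...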
sklearn.modules.generated.sklearn.metrics.roc_auc_score#sklearn.metrics.roc_auc_score
sklearn.metrics.roc_curve(y_true, y_score, *, pos_label=None, sample_weight=None, drop_intermediate=True) [source] Compute Receiver operating characteristic (ROC). Note: this implementation is restricted to the binary classification task. Read more in the User Guide. Parameters y_truendarray of shape (n_samples,) True binary labels. If labels are not either {-1, 1} or {0, 1}, then pos_label should be explicitly given. y_scorendarray of shape (n_samples,) Target scores, can either be probability estimates of the positive class, confidence values, or non-thresholded measure of decisions (as returned by “decision_function” on some classifiers). pos_labelint or str, default=None The label of the positive class. When pos_label=None, if y_true is in {-1, 1} or {0, 1}, pos_label is set to 1, otherwise an error will be raised. sample_weightarray-like of shape (n_samples,), default=None Sample weights. drop_intermediatebool, default=True Whether to drop some suboptimal thresholds which would not appear on a plotted ROC curve. This is useful in order to create lighter ROC curves. New in version 0.17: parameter drop_intermediate. Returns fprndarray of shape (>2,) Increasing false positive rates such that element i is the false positive rate of predictions with score >= thresholds[i]. tprndarray of shape (>2,) Increasing true positive rates such that element i is the true positive rate of predictions with score >= thresholds[i]. thresholdsndarray of shape = (n_thresholds,) Decreasing thresholds on the decision function used to compute fpr and tpr. thresholds[0] represents no instances being predicted and is arbitrarily set to max(y_score) + 1. See also plot_roc_curve Plot Receiver operating characteristic (ROC) curve. RocCurveDisplay ROC Curve visualization. det_curve Compute error rates for different probability thresholds. roc_auc_score Compute the area under the ROC curve. Notes Since the thresholds are sorted from low to high values, they are reversed upon returning them to ensure they correspond to both fpr and tpr, which are sorted in reversed order during their calculation. References 1 Wikipedia entry for the Receiver operating characteristic 2 Fawcett T. An introduction to ROC analysis[J]. Pattern Recognition Letters, 2006, 27(8):861-874. Examples >>> import numpy as np >>> from sklearn import metrics >>> y = np.array([1, 1, 2, 2]) >>> scores = np.array([0.1, 0.4, 0.35, 0.8]) >>> fpr, tpr, thresholds = metrics.roc_curve(y, scores, pos_label=2) >>> fpr array([0. , 0. , 0.5, 0.5, 1. ]) >>> tpr array([0. , 0.5, 0.5, 1. , 1. ]) >>> thresholds array([1.8 , 0.8 , 0.4 , 0.35, 0.1 ])
sklearn.modules.generated.sklearn.metrics.roc_curve#sklearn.metrics.roc_curve
sklearn.metrics.silhouette_samples(X, labels, *, metric='euclidean', **kwds) [source] Compute the Silhouette Coefficient for each sample. The Silhouette Coefficient is a measure of how well samples are clustered with samples that are similar to themselves. Clustering models with a high Silhouette Coefficient are said to be dense, where samples in the same cluster are similar to each other, and well separated, where samples in different clusters are not very similar to each other. The Silhouette Coefficient is calculated using the mean intra-cluster distance (a) and the mean nearest-cluster distance (b) for each sample. The Silhouette Coefficient for a sample is (b - a) / max(a, b). Note that Silhouette Coefficient is only defined if number of labels is 2 <= n_labels <= n_samples - 1. This function returns the Silhouette Coefficient for each sample. The best value is 1 and the worst value is -1. Values near 0 indicate overlapping clusters. Read more in the User Guide. Parameters Xarray-like of shape (n_samples_a, n_samples_a) if metric == “precomputed” or (n_samples_a, n_features) otherwise An array of pairwise distances between samples, or a feature array. labelsarray-like of shape (n_samples,) Label values for each sample. metricstr or callable, default=’euclidean’ The metric to use when calculating distance between instances in a feature array. If metric is a string, it must be one of the options allowed by sklearn.metrics.pairwise.pairwise_distances. If X is the distance array itself, use “precomputed” as the metric. Precomputed distance matrices must have 0 along the diagonal. `**kwds`optional keyword parameters Any further parameters are passed directly to the distance function. If using a scipy.spatial.distance metric, the parameters are still metric dependent. See the scipy docs for usage examples. Returns silhouettearray-like of shape (n_samples,) Silhouette Coefficients for each sample. References 1 Peter J. Rousseeuw (1987). “Silhouettes: a Graphical Aid to the Interpretation and Validation of Cluster Analysis”. Computational and Applied Mathematics 20: 53-65. 2 Wikipedia entry on the Silhouette Coefficient
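This entry ships no usage example; a minimal sketch with two well-separated clusters (the values follow from the (b - a) / max(a, b) definition: by symmetry all four coefficients are equal, with a = 1 and b ≈ 10.02):
>>> import numpy as np
>>> from sklearn.metrics import silhouette_samples
>>> X = np.array([[0, 0], [0, 1], [10, 0], [10, 1]])
>>> labels = np.array([0, 0, 1, 1])
>>> silhouette_samples(X, labels)
array([0.90..., 0.90..., 0.90..., 0.90...])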
sklearn.modules.generated.sklearn.metrics.silhouette_samples#sklearn.metrics.silhouette_samples
sklearn.metrics.silhouette_score(X, labels, *, metric='euclidean', sample_size=None, random_state=None, **kwds) [source] Compute the mean Silhouette Coefficient of all samples. The Silhouette Coefficient is calculated using the mean intra-cluster distance (a) and the mean nearest-cluster distance (b) for each sample. The Silhouette Coefficient for a sample is (b - a) / max(a, b). To clarify, b is the distance between a sample and the nearest cluster that the sample is not a part of. Note that Silhouette Coefficient is only defined if number of labels is 2 <= n_labels <= n_samples - 1. This function returns the mean Silhouette Coefficient over all samples. To obtain the values for each sample, use silhouette_samples. The best value is 1 and the worst value is -1. Values near 0 indicate overlapping clusters. Negative values generally indicate that a sample has been assigned to the wrong cluster, as a different cluster is more similar. Read more in the User Guide. Parameters Xarray-like of shape (n_samples_a, n_samples_a) if metric == “precomputed” or (n_samples_a, n_features) otherwise An array of pairwise distances between samples, or a feature array. labelsarray-like of shape (n_samples,) Predicted labels for each sample. metricstr or callable, default=’euclidean’ The metric to use when calculating distance between instances in a feature array. If metric is a string, it must be one of the options allowed by metrics.pairwise.pairwise_distances. If X is the distance array itself, use metric="precomputed". sample_sizeint, default=None The size of the sample to use when computing the Silhouette Coefficient on a random subset of the data. If sample_size is None, no sampling is used. random_stateint, RandomState instance or None, default=None Determines random number generation for selecting a subset of samples. Used when sample_size is not None. Pass an int for reproducible results across multiple function calls. See Glossary. **kwdsoptional keyword parameters Any further parameters are passed directly to the distance function. If using a scipy.spatial.distance metric, the parameters are still metric dependent. See the scipy docs for usage examples. Returns silhouettefloat Mean Silhouette Coefficient for all samples. References 1 Peter J. Rousseeuw (1987). “Silhouettes: a Graphical Aid to the Interpretation and Validation of Cluster Analysis”. Computational and Applied Mathematics 20: 53-65. 2 Wikipedia entry on the Silhouette Coefficient
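A minimal usage sketch (not part of the original docstring), reusing the two well-separated clusters from the silhouette_samples example above; the mean coefficient is simply the average of the four per-sample values:
>>> import numpy as np
>>> from sklearn.metrics import silhouette_score
>>> X = np.array([[0, 0], [0, 1], [10, 0], [10, 1]])
>>> labels = np.array([0, 0, 1, 1])
>>> silhouette_score(X, labels)
0.90...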
sklearn.modules.generated.sklearn.metrics.silhouette_score#sklearn.metrics.silhouette_score
sklearn.metrics.top_k_accuracy_score(y_true, y_score, *, k=2, normalize=True, sample_weight=None, labels=None) [source] Top-k Accuracy classification score. This metric computes the number of times where the correct label is among the top k labels predicted (ranked by predicted scores). Note that the multilabel case isn’t covered here. Read more in the User Guide. Parameters y_truearray-like of shape (n_samples,) True labels. y_scorearray-like of shape (n_samples,) or (n_samples, n_classes) Target scores. These can be either probability estimates or non-thresholded decision values (as returned by decision_function on some classifiers). The binary case expects scores with shape (n_samples,) while the multiclass case expects scores with shape (n_samples, n_classes). In the multiclass case, the order of the class scores must correspond to the order of labels, if provided, or else to the numerical or lexicographical order of the labels in y_true. kint, default=2 Number of most likely outcomes considered to find the correct label. normalizebool, default=True If True, return the fraction of correctly classified samples. Otherwise, return the number of correctly classified samples. sample_weightarray-like of shape (n_samples,), default=None Sample weights. If None, all samples are given the same weight. labelsarray-like of shape (n_classes,), default=None Multiclass only. List of labels that index the classes in y_score. If None, the numerical or lexicographical order of the labels in y_true is used. Returns scorefloat The top-k accuracy score. The best performance is 1 with normalize == True and the number of samples with normalize == False. See also accuracy_score Notes In cases where two or more labels are assigned equal predicted scores, the labels with the highest indices will be chosen first. This might impact the result if the correct label falls outside the top k because of that tie-breaking. Examples >>> import numpy as np >>> from sklearn.metrics import top_k_accuracy_score >>> y_true = np.array([0, 1, 2, 2]) >>> y_score = np.array([[0.5, 0.2, 0.2], # 0 is in top 2 ... [0.3, 0.4, 0.2], # 1 is in top 2 ... [0.2, 0.4, 0.3], # 2 is in top 2 ... [0.7, 0.2, 0.1]]) # 2 isn't in top 2 >>> top_k_accuracy_score(y_true, y_score, k=2) 0.75 >>> # Not normalizing gives the number of "correctly" classified samples >>> top_k_accuracy_score(y_true, y_score, k=2, normalize=False) 3
sklearn.modules.generated.sklearn.metrics.top_k_accuracy_score#sklearn.metrics.top_k_accuracy_score
sklearn.metrics.v_measure_score(labels_true, labels_pred, *, beta=1.0) [source] V-measure cluster labeling given a ground truth. This score is identical to normalized_mutual_info_score with the 'arithmetic' option for averaging. The V-measure is the harmonic mean between homogeneity and completeness: v = (1 + beta) * homogeneity * completeness / (beta * homogeneity + completeness) This metric is independent of the absolute values of the labels: a permutation of the class or cluster label values won’t change the score value in any way. This metric is furthermore symmetric: switching labels_true with labels_pred will return the same score value. This can be useful to measure the agreement of two independent label assignment strategies on the same dataset when the real ground truth is not known. Read more in the User Guide. Parameters labels_trueint array, shape = [n_samples] Ground truth class labels to be used as a reference. labels_predarray-like of shape (n_samples,) Cluster labels to evaluate. betafloat, default=1.0 Ratio of weight attributed to homogeneity vs completeness. If beta is greater than 1, completeness is weighted more strongly in the calculation. If beta is less than 1, homogeneity is weighted more strongly. Returns v_measurefloat Score between 0.0 and 1.0; 1.0 stands for a perfectly complete labeling. See also homogeneity_score completeness_score normalized_mutual_info_score References 1 Andrew Rosenberg and Julia Hirschberg, 2007. V-Measure: A conditional entropy-based external cluster evaluation measure Examples Perfect labelings are both homogeneous and complete, hence have score 1.0: >>> from sklearn.metrics.cluster import v_measure_score >>> v_measure_score([0, 0, 1, 1], [0, 0, 1, 1]) 1.0 >>> v_measure_score([0, 0, 1, 1], [1, 1, 0, 0]) 1.0 Labelings that assign all class members to the same clusters are complete but not homogeneous, hence penalized: >>> print("%.6f" % v_measure_score([0, 0, 1, 2], [0, 0, 1, 1])) 0.8... >>> print("%.6f" % v_measure_score([0, 1, 2, 3], [0, 0, 1, 1])) 0.66... Labelings that have pure clusters with members coming from the same classes are homogeneous, but unnecessary splits harm completeness and thus penalize the V-measure as well: >>> print("%.6f" % v_measure_score([0, 0, 1, 1], [0, 0, 1, 2])) 0.8... >>> print("%.6f" % v_measure_score([0, 0, 1, 1], [0, 1, 2, 3])) 0.66... If class members are completely split across different clusters, the assignment is totally incomplete, hence the V-Measure is null: >>> print("%.6f" % v_measure_score([0, 0, 0, 0], [0, 1, 2, 3])) 0.0... Clusters that include samples from totally different classes totally destroy the homogeneity of the labeling, hence: >>> print("%.6f" % v_measure_score([0, 0, 1, 1], [0, 0, 0, 0])) 0.0...
sklearn.modules.generated.sklearn.metrics.v_measure_score#sklearn.metrics.v_measure_score
sklearn.metrics.zero_one_loss(y_true, y_pred, *, normalize=True, sample_weight=None) [source] Zero-one classification loss. If normalize is True, return the fraction of misclassifications (float), else it returns the number of misclassifications (int). The best performance is 0. Read more in the User Guide. Parameters y_true1d array-like, or label indicator array / sparse matrix Ground truth (correct) labels. y_pred1d array-like, or label indicator array / sparse matrix Predicted labels, as returned by a classifier. normalizebool, default=True If False, return the number of misclassifications. Otherwise, return the fraction of misclassifications. sample_weightarray-like of shape (n_samples,), default=None Sample weights. Returns lossfloat or int, If normalize == True, return the fraction of misclassifications (float), else it returns the number of misclassifications (int). See also accuracy_score, hamming_loss, jaccard_score Notes In multilabel classification, the zero_one_loss function corresponds to the subset zero-one loss: for each sample, the entire set of labels must be correctly predicted, otherwise the loss for that sample is equal to one. Examples >>> from sklearn.metrics import zero_one_loss >>> y_pred = [1, 2, 3, 4] >>> y_true = [2, 2, 3, 4] >>> zero_one_loss(y_true, y_pred) 0.25 >>> zero_one_loss(y_true, y_pred, normalize=False) 1 In the multilabel case with binary label indicators: >>> import numpy as np >>> zero_one_loss(np.array([[0, 1], [1, 1]]), np.ones((2, 2))) 0.5
sklearn.modules.generated.sklearn.metrics.zero_one_loss#sklearn.metrics.zero_one_loss
class sklearn.mixture.BayesianGaussianMixture(*, n_components=1, covariance_type='full', tol=0.001, reg_covar=1e-06, max_iter=100, n_init=1, init_params='kmeans', weight_concentration_prior_type='dirichlet_process', weight_concentration_prior=None, mean_precision_prior=None, mean_prior=None, degrees_of_freedom_prior=None, covariance_prior=None, random_state=None, warm_start=False, verbose=0, verbose_interval=10) [source] Variational Bayesian estimation of a Gaussian mixture. This class allows one to infer an approximate posterior distribution over the parameters of a Gaussian mixture distribution. The effective number of components can be inferred from the data. This class implements two types of prior for the weights distribution: a finite mixture model with Dirichlet distribution and an infinite mixture model with the Dirichlet Process. In practice the Dirichlet Process inference algorithm is approximated and uses a truncated distribution with a fixed maximum number of components (called the Stick-breaking representation). The number of components actually used almost always depends on the data. New in version 0.18. Read more in the User Guide. Parameters n_componentsint, default=1 The number of mixture components. Depending on the data and the value of the weight_concentration_prior the model can decide to not use all the components by setting some component weights_ to values very close to zero. The number of effective components is therefore smaller than n_components. covariance_type{‘full’, ‘tied’, ‘diag’, ‘spherical’}, default=’full’ String describing the type of covariance parameters to use. Must be one of: 'full' (each component has its own general covariance matrix), 'tied' (all components share the same general covariance matrix), 'diag' (each component has its own diagonal covariance matrix), 'spherical' (each component has its own single variance). tolfloat, default=1e-3 The convergence threshold. EM iterations will stop when the lower bound average gain on the likelihood (of the training data with respect to the model) is below this threshold. reg_covarfloat, default=1e-6 Non-negative regularization added to the diagonal of covariance. Helps ensure that the covariance matrices are all positive. max_iterint, default=100 The number of EM iterations to perform. n_initint, default=1 The number of initializations to perform. The result with the highest lower bound value on the likelihood is kept. init_params{‘kmeans’, ‘random’}, default=’kmeans’ The method used to initialize the weights, the means and the covariances. Must be one of: 'kmeans' : responsibilities are initialized using kmeans. 'random' : responsibilities are initialized randomly. weight_concentration_prior_typestr, default=’dirichlet_process’ String describing the type of the weight concentration prior. Must be one of: 'dirichlet_process' (using the Stick-breaking representation), 'dirichlet_distribution' (can favor more uniform weights). weight_concentration_priorfloat | None, default=None. The Dirichlet concentration of each component on the weight distribution (Dirichlet). This is commonly called gamma in the literature. A higher concentration puts more mass in the center and will lead to more components being active, while a lower concentration parameter will lead to more mass at the edge of the mixture weights simplex. The value of the parameter must be greater than 0. If it is None, it’s set to 1. / n_components. mean_precision_priorfloat | None, default=None. 
The precision prior on the mean distribution (Gaussian). Controls the extent of the region where means can be placed. Larger values concentrate the cluster means around mean_prior. The value of the parameter must be greater than 0. If it is None, it is set to 1. mean_priorarray-like, shape (n_features,), default=None. The prior on the mean distribution (Gaussian). If it is None, it is set to the mean of X. degrees_of_freedom_priorfloat | None, default=None. The prior of the number of degrees of freedom on the covariance distributions (Wishart). If it is None, it’s set to n_features. covariance_priorfloat or array-like, default=None. The prior on the covariance distribution (Wishart). If it is None, the empirical covariance prior is initialized using the covariance of X. The shape depends on covariance_type: (n_features, n_features) if 'full', (n_features, n_features) if 'tied', (n_features) if 'diag', float if 'spherical' random_stateint, RandomState instance or None, default=None Controls the random seed given to the method chosen to initialize the parameters (see init_params). In addition, it controls the generation of random samples from the fitted distribution (see the method sample). Pass an int for reproducible output across multiple function calls. See Glossary. warm_startbool, default=False If ‘warm_start’ is True, the solution of the last fitting is used as initialization for the next call of fit(). This can speed up convergence when fit is called several times on similar problems. See the Glossary. verboseint, default=0 Enable verbose output. If 1 then it prints the current initialization and each iteration step. If greater than 1 then it prints also the log probability and the time needed for each step. verbose_intervalint, default=10 Number of iterations done before the next print. Attributes weights_array-like of shape (n_components,) The weights of each mixture component. means_array-like of shape (n_components, n_features) The mean of each mixture component. covariances_array-like The covariance of each mixture component. The shape depends on covariance_type: (n_components,) if 'spherical', (n_features, n_features) if 'tied', (n_components, n_features) if 'diag', (n_components, n_features, n_features) if 'full' precisions_array-like The precision matrices for each component in the mixture. A precision matrix is the inverse of a covariance matrix. A covariance matrix is symmetric positive definite so the mixture of Gaussians can be equivalently parameterized by the precision matrices. Storing the precision matrices instead of the covariance matrices makes it more efficient to compute the log-likelihood of new samples at test time. The shape depends on covariance_type: (n_components,) if 'spherical', (n_features, n_features) if 'tied', (n_components, n_features) if 'diag', (n_components, n_features, n_features) if 'full' precisions_cholesky_array-like The Cholesky decomposition of the precision matrices of each mixture component. A precision matrix is the inverse of a covariance matrix. A covariance matrix is symmetric positive definite so the mixture of Gaussians can be equivalently parameterized by the precision matrices. Storing the precision matrices instead of the covariance matrices makes it more efficient to compute the log-likelihood of new samples at test time. 
The shape depends on covariance_type: (n_components,) if 'spherical', (n_features, n_features) if 'tied', (n_components, n_features) if 'diag', (n_components, n_features, n_features) if 'full' converged_bool True when convergence was reached in fit(), False otherwise. n_iter_int Number of steps used by the best fit of inference to reach convergence. lower_bound_float Lower bound value on the likelihood (of the training data with respect to the model) of the best fit of inference. weight_concentration_prior_tuple or float The Dirichlet concentration of each component on the weight distribution (Dirichlet). The type depends on weight_concentration_prior_type: (float, float) if 'dirichlet_process' (Beta parameters), float if 'dirichlet_distribution' (Dirichlet parameters). A higher concentration puts more mass in the center and will lead to more components being active, while a lower concentration parameter will lead to more mass at the edge of the simplex. weight_concentration_array-like of shape (n_components,) The Dirichlet concentration of each component on the weight distribution (Dirichlet). mean_precision_prior_float The precision prior on the mean distribution (Gaussian). Controls the extent of the region where means can be placed. Larger values concentrate the cluster means around mean_prior. If mean_precision_prior is set to None, mean_precision_prior_ is set to 1. mean_precision_array-like of shape (n_components,) The precision of each component on the mean distribution (Gaussian). mean_prior_array-like of shape (n_features,) The prior on the mean distribution (Gaussian). degrees_of_freedom_prior_float The prior of the number of degrees of freedom on the covariance distributions (Wishart). degrees_of_freedom_array-like of shape (n_components,) The number of degrees of freedom of each component in the model. covariance_prior_float or array-like The prior on the covariance distribution (Wishart). The shape depends on covariance_type: (n_features, n_features) if 'full', (n_features, n_features) if 'tied', (n_features) if 'diag', float if 'spherical' See also GaussianMixture Finite Gaussian mixture fit with EM. References 1 Bishop, Christopher M. (2006). “Pattern recognition and machine learning”. Vol. 4 No. 4. New York: Springer. 2 Hagai Attias. (2000). “A Variational Bayesian Framework for Graphical Models”. In Advances in Neural Information Processing Systems 12. 3 Blei, David M. and Michael I. Jordan. (2006). “Variational inference for Dirichlet process mixtures”. Bayesian analysis 1.1 Examples >>> import numpy as np >>> from sklearn.mixture import BayesianGaussianMixture >>> X = np.array([[1, 2], [1, 4], [1, 0], [4, 2], [12, 4], [10, 7]]) >>> bgm = BayesianGaussianMixture(n_components=2, random_state=42).fit(X) >>> bgm.means_ array([[2.49... , 2.29...], [8.45..., 4.52... ]]) >>> bgm.predict([[0, 0], [9, 3]]) array([0, 1]) Methods fit(X[, y]) Estimate model parameters with the EM algorithm. fit_predict(X[, y]) Estimate model parameters using X and predict the labels for X. get_params([deep]) Get parameters for this estimator. predict(X) Predict the labels for the data samples in X using trained model. predict_proba(X) Predict posterior probability of each component given the data. sample([n_samples]) Generate random samples from the fitted Gaussian distribution. score(X[, y]) Compute the per-sample average log-likelihood of the given data X. score_samples(X) Compute the weighted log probabilities for each sample. set_params(**params) Set the parameters of this estimator. 
fit(X, y=None) [source] Estimate model parameters with the EM algorithm. The method fits the model n_init times and sets the parameters with which the model has the largest likelihood or lower bound. Within each trial, the method iterates between E-step and M-step for max_iter times until the change of likelihood or lower bound is less than tol, otherwise, a ConvergenceWarning is raised. If warm_start is True, then n_init is ignored and a single initialization is performed upon the first call. Upon consecutive calls, training starts where it left off. Parameters Xarray-like of shape (n_samples, n_features) List of n_features-dimensional data points. Each row corresponds to a single data point. Returns self fit_predict(X, y=None) [source] Estimate model parameters using X and predict the labels for X. The method fits the model n_init times and sets the parameters with which the model has the largest likelihood or lower bound. Within each trial, the method iterates between E-step and M-step for max_iter times until the change of likelihood or lower bound is less than tol, otherwise, a ConvergenceWarning is raised. After fitting, it predicts the most probable label for the input data points. New in version 0.20. Parameters Xarray-like of shape (n_samples, n_features) List of n_features-dimensional data points. Each row corresponds to a single data point. Returns labelsarray, shape (n_samples,) Component labels. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. predict(X) [source] Predict the labels for the data samples in X using trained model. Parameters Xarray-like of shape (n_samples, n_features) List of n_features-dimensional data points. Each row corresponds to a single data point. Returns labelsarray, shape (n_samples,) Component labels. predict_proba(X) [source] Predict posterior probability of each component given the data. Parameters Xarray-like of shape (n_samples, n_features) List of n_features-dimensional data points. Each row corresponds to a single data point. Returns resparray, shape (n_samples, n_components) Returns the probability of each Gaussian (state) in the model given each sample. sample(n_samples=1) [source] Generate random samples from the fitted Gaussian distribution. Parameters n_samplesint, default=1 Number of samples to generate. Returns Xarray, shape (n_samples, n_features) Randomly generated sample. yarray, shape (n_samples,) Component labels. score(X, y=None) [source] Compute the per-sample average log-likelihood of the given data X. Parameters Xarray-like of shape (n_samples, n_dimensions) List of n_features-dimensional data points. Each row corresponds to a single data point. Returns log_likelihoodfloat Log likelihood of the Gaussian mixture given X. score_samples(X) [source] Compute the weighted log probabilities for each sample. Parameters Xarray-like of shape (n_samples, n_features) List of n_features-dimensional data points. Each row corresponds to a single data point. Returns log_probarray, shape (n_samples,) Log probabilities of each data point in X. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. 
Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
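A supplementary sketch (not part of the original docstring) of the component-pruning behaviour described under n_components and weight_concentration_prior; no weight values are asserted because the exact result depends on the data and the seed:
>>> import numpy as np
>>> from sklearn.mixture import BayesianGaussianMixture
>>> X = np.array([[1, 2], [1, 4], [1, 0], [4, 2], [12, 4], [10, 7]])
>>> # request more components than the two obvious groups in X; a small
>>> # weight_concentration_prior pushes mass to the edge of the weight
>>> # simplex, so superfluous components are driven toward near-zero weight
>>> bgm = BayesianGaussianMixture(n_components=4,
...                               weight_concentration_prior=0.01,
...                               random_state=42).fit(X)
>>> n_active = int((bgm.weights_ > 0.05).sum())  # effectively used components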
sklearn.modules.generated.sklearn.mixture.bayesiangaussianmixture#sklearn.mixture.BayesianGaussianMixture
fit(X, y=None) [source] Estimate model parameters with the EM algorithm. The method fits the model n_init times and sets the parameters with which the model has the largest likelihood or lower bound. Within each trial, the method iterates between E-step and M-step for max_iter times until the change of likelihood or lower bound is less than tol, otherwise, a ConvergenceWarning is raised. If warm_start is True, then n_init is ignored and a single initialization is performed upon the first call. Upon consecutive calls, training starts where it left off. Parameters Xarray-like of shape (n_samples, n_features) List of n_features-dimensional data points. Each row corresponds to a single data point. Returns self
sklearn.modules.generated.sklearn.mixture.bayesiangaussianmixture#sklearn.mixture.BayesianGaussianMixture.fit
fit_predict(X, y=None) [source] Estimate model parameters using X and predict the labels for X. The method fits the model n_init times and sets the parameters with which the model has the largest likelihood or lower bound. Within each trial, the method iterates between E-step and M-step for max_iter times until the change of likelihood or lower bound is less than tol, otherwise, a ConvergenceWarning is raised. After fitting, it predicts the most probable label for the input data points. New in version 0.20. Parameters Xarray-like of shape (n_samples, n_features) List of n_features-dimensional data points. Each row corresponds to a single data point. Returns labelsarray, shape (n_samples,) Component labels.
sklearn.modules.generated.sklearn.mixture.bayesiangaussianmixture#sklearn.mixture.BayesianGaussianMixture.fit_predict
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
sklearn.modules.generated.sklearn.mixture.bayesiangaussianmixture#sklearn.mixture.BayesianGaussianMixture.get_params
predict(X) [source] Predict the labels for the data samples in X using trained model. Parameters Xarray-like of shape (n_samples, n_features) List of n_features-dimensional data points. Each row corresponds to a single data point. Returns labelsarray, shape (n_samples,) Component labels.
sklearn.modules.generated.sklearn.mixture.bayesiangaussianmixture#sklearn.mixture.BayesianGaussianMixture.predict
predict_proba(X) [source] Predict posterior probability of each component given the data. Parameters Xarray-like of shape (n_samples, n_features) List of n_features-dimensional data points. Each row corresponds to a single data point. Returns resparray, shape (n_samples, n_components) Returns the probability each Gaussian (state) in the model given each sample.
sklearn.modules.generated.sklearn.mixture.bayesiangaussianmixture#sklearn.mixture.BayesianGaussianMixture.predict_proba
sample(n_samples=1) [source] Generate random samples from the fitted Gaussian distribution. Parameters n_samplesint, default=1 Number of samples to generate. Returns Xarray, shape (n_samples, n_features) Randomly generated sample. yarray, shape (n_samples,) Component labels.
sklearn.modules.generated.sklearn.mixture.bayesiangaussianmixture#sklearn.mixture.BayesianGaussianMixture.sample
score(X, y=None) [source] Compute the per-sample average log-likelihood of the given data X. Parameters Xarray-like of shape (n_samples, n_features) List of n_features-dimensional data points. Each row corresponds to a single data point. Returns log_likelihoodfloat Log likelihood of the Gaussian mixture given X.
sklearn.modules.generated.sklearn.mixture.bayesiangaussianmixture#sklearn.mixture.BayesianGaussianMixture.score
score_samples(X) [source] Compute the weighted log probabilities for each sample. Parameters Xarray-like of shape (n_samples, n_features) List of n_features-dimensional data points. Each row corresponds to a single data point. Returns log_probarray, shape (n_samples,) Log probabilities of each data point in X.
sklearn.modules.generated.sklearn.mixture.bayesiangaussianmixture#sklearn.mixture.BayesianGaussianMixture.score_samples
set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
sklearn.modules.generated.sklearn.mixture.bayesiangaussianmixture#sklearn.mixture.BayesianGaussianMixture.set_params
class sklearn.mixture.GaussianMixture(n_components=1, *, covariance_type='full', tol=0.001, reg_covar=1e-06, max_iter=100, n_init=1, init_params='kmeans', weights_init=None, means_init=None, precisions_init=None, random_state=None, warm_start=False, verbose=0, verbose_interval=10) [source] Gaussian Mixture. Representation of a Gaussian mixture model probability distribution. This class allows estimation of the parameters of a Gaussian mixture distribution. Read more in the User Guide. New in version 0.18. Parameters n_componentsint, default=1 The number of mixture components. covariance_type{'full', 'tied', 'diag', 'spherical'}, default='full' String describing the type of covariance parameters to use. Must be one of: 'full': each component has its own general covariance matrix. 'tied': all components share the same general covariance matrix. 'diag': each component has its own diagonal covariance matrix. 'spherical': each component has its own single variance. tolfloat, default=1e-3 The convergence threshold. EM iterations will stop when the lower bound average gain is below this threshold. reg_covarfloat, default=1e-6 Non-negative regularization added to the diagonal of covariance. Ensures that the covariance matrices are all positive. max_iterint, default=100 The number of EM iterations to perform. n_initint, default=1 The number of initializations to perform. The best results are kept. init_params{'kmeans', 'random'}, default='kmeans' The method used to initialize the weights, the means and the precisions. Must be one of: 'kmeans': responsibilities are initialized using kmeans. 'random': responsibilities are initialized randomly. weights_initarray-like of shape (n_components,), default=None The user-provided initial weights. If it is None, weights are initialized using the init_params method. means_initarray-like of shape (n_components, n_features), default=None The user-provided initial means. If it is None, means are initialized using the init_params method. precisions_initarray-like, default=None The user-provided initial precisions (inverse of the covariance matrices). If it is None, precisions are initialized using the init_params method. The shape depends on covariance_type: (n_components,) if 'spherical', (n_features, n_features) if 'tied', (n_components, n_features) if 'diag', (n_components, n_features, n_features) if 'full'. random_stateint, RandomState instance or None, default=None Controls the random seed given to the method chosen to initialize the parameters (see init_params). In addition, it controls the generation of random samples from the fitted distribution (see the method sample). Pass an int for reproducible output across multiple function calls. See Glossary. warm_startbool, default=False If warm_start is True, the solution of the last fitting is used as initialization for the next call of fit(). This can speed up convergence when fit is called several times on similar problems. In that case, n_init is ignored and only a single initialization occurs upon the first call. See the Glossary. verboseint, default=0 Enable verbose output. If 1, it prints the current initialization and each iteration step. If greater than 1, it also prints the log probability and the time needed for each step. verbose_intervalint, default=10 Number of iterations done before the next print. Attributes weights_array-like of shape (n_components,) The weights of each mixture component. means_array-like of shape (n_components, n_features) The mean of each mixture component.
covariances_array-like The covariance of each mixture component. The shape depends on covariance_type: (n_components,) if 'spherical', (n_features, n_features) if 'tied', (n_components, n_features) if 'diag', (n_components, n_features, n_features) if 'full'. precisions_array-like The precision matrices for each component in the mixture. A precision matrix is the inverse of a covariance matrix. A covariance matrix is symmetric positive definite, so a mixture of Gaussians can be equivalently parameterized by the precision matrices. Storing the precision matrices instead of the covariance matrices makes it more efficient to compute the log-likelihood of new samples at test time. The shape depends on covariance_type: (n_components,) if 'spherical', (n_features, n_features) if 'tied', (n_components, n_features) if 'diag', (n_components, n_features, n_features) if 'full'. precisions_cholesky_array-like The Cholesky decomposition of the precision matrices of each mixture component. A precision matrix is the inverse of a covariance matrix. A covariance matrix is symmetric positive definite, so a mixture of Gaussians can be equivalently parameterized by the precision matrices. Storing the precision matrices instead of the covariance matrices makes it more efficient to compute the log-likelihood of new samples at test time. The shape depends on covariance_type: (n_components,) if 'spherical', (n_features, n_features) if 'tied', (n_components, n_features) if 'diag', (n_components, n_features, n_features) if 'full'. converged_bool True when convergence was reached in fit(), False otherwise. n_iter_int Number of steps used by the best fit of EM to reach convergence. lower_bound_float Lower bound value on the log-likelihood (of the training data with respect to the model) of the best fit of EM. See also BayesianGaussianMixture Gaussian mixture model fit with variational inference. Examples >>> import numpy as np >>> from sklearn.mixture import GaussianMixture >>> X = np.array([[1, 2], [1, 4], [1, 0], [10, 2], [10, 4], [10, 0]]) >>> gm = GaussianMixture(n_components=2, random_state=0).fit(X) >>> gm.means_ array([[10., 2.], [ 1., 2.]]) >>> gm.predict([[0, 0], [12, 3]]) array([1, 0]) Methods aic(X) Akaike information criterion for the current model on the input X. bic(X) Bayesian information criterion for the current model on the input X. fit(X[, y]) Estimate model parameters with the EM algorithm. fit_predict(X[, y]) Estimate model parameters using X and predict the labels for X. get_params([deep]) Get parameters for this estimator. predict(X) Predict the labels for the data samples in X using the trained model. predict_proba(X) Predict the posterior probability of each component given the data. sample([n_samples]) Generate random samples from the fitted Gaussian distribution. score(X[, y]) Compute the per-sample average log-likelihood of the given data X. score_samples(X) Compute the weighted log probabilities for each sample. set_params(**params) Set the parameters of this estimator. aic(X) [source] Akaike information criterion for the current model on the input X. Parameters Xarray of shape (n_samples, n_dimensions) Returns aicfloat The lower the better. bic(X) [source] Bayesian information criterion for the current model on the input X. Parameters Xarray of shape (n_samples, n_dimensions) Returns bicfloat The lower the better. fit(X, y=None) [source] Estimate model parameters with the EM algorithm.
The method fits the model n_init times and sets the parameters with which the model has the largest likelihood or lower bound. Within each trial, the method iterates between the E-step and the M-step for at most max_iter times, until the change in the likelihood or lower bound is less than tol; otherwise a ConvergenceWarning is raised. If warm_start is True, then n_init is ignored and a single initialization is performed upon the first call. Upon consecutive calls, training starts where it left off. Parameters Xarray-like of shape (n_samples, n_features) List of n_features-dimensional data points. Each row corresponds to a single data point. Returns self fit_predict(X, y=None) [source] Estimate model parameters using X and predict the labels for X. The method fits the model n_init times and sets the parameters with which the model has the largest likelihood or lower bound. Within each trial, the method iterates between the E-step and the M-step for at most max_iter times, until the change in the likelihood or lower bound is less than tol; otherwise a ConvergenceWarning is raised. After fitting, it predicts the most probable label for the input data points. New in version 0.20. Parameters Xarray-like of shape (n_samples, n_features) List of n_features-dimensional data points. Each row corresponds to a single data point. Returns labelsarray, shape (n_samples,) Component labels. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. predict(X) [source] Predict the labels for the data samples in X using the trained model. Parameters Xarray-like of shape (n_samples, n_features) List of n_features-dimensional data points. Each row corresponds to a single data point. Returns labelsarray, shape (n_samples,) Component labels. predict_proba(X) [source] Predict the posterior probability of each component given the data. Parameters Xarray-like of shape (n_samples, n_features) List of n_features-dimensional data points. Each row corresponds to a single data point. Returns resparray, shape (n_samples, n_components) The probability of each Gaussian (state) in the model given each sample. sample(n_samples=1) [source] Generate random samples from the fitted Gaussian distribution. Parameters n_samplesint, default=1 Number of samples to generate. Returns Xarray, shape (n_samples, n_features) Randomly generated sample. yarray, shape (n_samples,) Component labels. score(X, y=None) [source] Compute the per-sample average log-likelihood of the given data X. Parameters Xarray-like of shape (n_samples, n_features) List of n_features-dimensional data points. Each row corresponds to a single data point. Returns log_likelihoodfloat Log likelihood of the Gaussian mixture given X. score_samples(X) [source] Compute the weighted log probabilities for each sample. Parameters Xarray-like of shape (n_samples, n_features) List of n_features-dimensional data points. Each row corresponds to a single data point. Returns log_probarray, shape (n_samples,) Log probabilities of each data point in X. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
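The covariances_ shapes listed above can be verified directly; a minimal sketch, reusing the toy data from the class example:

>>> import numpy as np
>>> from sklearn.mixture import GaussianMixture
>>> X = np.array([[1, 2], [1, 4], [1, 0], [10, 2], [10, 4], [10, 0]])
>>> # 'full': one (n_features, n_features) matrix per component
>>> GaussianMixture(n_components=2, covariance_type='full',
...                 random_state=0).fit(X).covariances_.shape
(2, 2, 2)
>>> # 'diag': one variance per feature and component
>>> GaussianMixture(n_components=2, covariance_type='diag',
...                 random_state=0).fit(X).covariances_.shape
(2, 2)
>>> # 'spherical': a single variance per component
>>> GaussianMixture(n_components=2, covariance_type='spherical',
...                 random_state=0).fit(X).covariances_.shape
(2,)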
sklearn.modules.generated.sklearn.mixture.gaussianmixture#sklearn.mixture.GaussianMixture
sklearn.mixture.GaussianMixture class sklearn.mixture.GaussianMixture(n_components=1, *, covariance_type='full', tol=0.001, reg_covar=1e-06, max_iter=100, n_init=1, init_params='kmeans', weights_init=None, means_init=None, precisions_init=None, random_state=None, warm_start=False, verbose=0, verbose_interval=10) [source] Gaussian Mixture. Representation of a Gaussian mixture model probability distribution. This class allows estimation of the parameters of a Gaussian mixture distribution. Read more in the User Guide. New in version 0.18. Parameters n_componentsint, default=1 The number of mixture components. covariance_type{'full', 'tied', 'diag', 'spherical'}, default='full' String describing the type of covariance parameters to use. Must be one of: 'full': each component has its own general covariance matrix. 'tied': all components share the same general covariance matrix. 'diag': each component has its own diagonal covariance matrix. 'spherical': each component has its own single variance. tolfloat, default=1e-3 The convergence threshold. EM iterations will stop when the lower bound average gain is below this threshold. reg_covarfloat, default=1e-6 Non-negative regularization added to the diagonal of covariance. Ensures that the covariance matrices are all positive. max_iterint, default=100 The number of EM iterations to perform. n_initint, default=1 The number of initializations to perform. The best results are kept. init_params{'kmeans', 'random'}, default='kmeans' The method used to initialize the weights, the means and the precisions. Must be one of: 'kmeans': responsibilities are initialized using kmeans. 'random': responsibilities are initialized randomly. weights_initarray-like of shape (n_components,), default=None The user-provided initial weights. If it is None, weights are initialized using the init_params method. means_initarray-like of shape (n_components, n_features), default=None The user-provided initial means. If it is None, means are initialized using the init_params method. precisions_initarray-like, default=None The user-provided initial precisions (inverse of the covariance matrices). If it is None, precisions are initialized using the init_params method. The shape depends on covariance_type: (n_components,) if 'spherical', (n_features, n_features) if 'tied', (n_components, n_features) if 'diag', (n_components, n_features, n_features) if 'full'. random_stateint, RandomState instance or None, default=None Controls the random seed given to the method chosen to initialize the parameters (see init_params). In addition, it controls the generation of random samples from the fitted distribution (see the method sample). Pass an int for reproducible output across multiple function calls. See Glossary. warm_startbool, default=False If warm_start is True, the solution of the last fitting is used as initialization for the next call of fit(). This can speed up convergence when fit is called several times on similar problems. In that case, n_init is ignored and only a single initialization occurs upon the first call. See the Glossary. verboseint, default=0 Enable verbose output. If 1, it prints the current initialization and each iteration step. If greater than 1, it also prints the log probability and the time needed for each step. verbose_intervalint, default=10 Number of iterations done before the next print. Attributes weights_array-like of shape (n_components,) The weights of each mixture component.
means_array-like of shape (n_components, n_features) The mean of each mixture component. covariances_array-like The covariance of each mixture component. The shape depends on covariance_type: (n_components,) if 'spherical', (n_features, n_features) if 'tied', (n_components, n_features) if 'diag', (n_components, n_features, n_features) if 'full'. precisions_array-like The precision matrices for each component in the mixture. A precision matrix is the inverse of a covariance matrix. A covariance matrix is symmetric positive definite, so a mixture of Gaussians can be equivalently parameterized by the precision matrices. Storing the precision matrices instead of the covariance matrices makes it more efficient to compute the log-likelihood of new samples at test time. The shape depends on covariance_type: (n_components,) if 'spherical', (n_features, n_features) if 'tied', (n_components, n_features) if 'diag', (n_components, n_features, n_features) if 'full'. precisions_cholesky_array-like The Cholesky decomposition of the precision matrices of each mixture component. A precision matrix is the inverse of a covariance matrix. A covariance matrix is symmetric positive definite, so a mixture of Gaussians can be equivalently parameterized by the precision matrices. Storing the precision matrices instead of the covariance matrices makes it more efficient to compute the log-likelihood of new samples at test time. The shape depends on covariance_type: (n_components,) if 'spherical', (n_features, n_features) if 'tied', (n_components, n_features) if 'diag', (n_components, n_features, n_features) if 'full'. converged_bool True when convergence was reached in fit(), False otherwise. n_iter_int Number of steps used by the best fit of EM to reach convergence. lower_bound_float Lower bound value on the log-likelihood (of the training data with respect to the model) of the best fit of EM. See also BayesianGaussianMixture Gaussian mixture model fit with variational inference. Examples >>> import numpy as np >>> from sklearn.mixture import GaussianMixture >>> X = np.array([[1, 2], [1, 4], [1, 0], [10, 2], [10, 4], [10, 0]]) >>> gm = GaussianMixture(n_components=2, random_state=0).fit(X) >>> gm.means_ array([[10., 2.], [ 1., 2.]]) >>> gm.predict([[0, 0], [12, 3]]) array([1, 0]) Methods aic(X) Akaike information criterion for the current model on the input X. bic(X) Bayesian information criterion for the current model on the input X. fit(X[, y]) Estimate model parameters with the EM algorithm. fit_predict(X[, y]) Estimate model parameters using X and predict the labels for X. get_params([deep]) Get parameters for this estimator. predict(X) Predict the labels for the data samples in X using the trained model. predict_proba(X) Predict the posterior probability of each component given the data. sample([n_samples]) Generate random samples from the fitted Gaussian distribution. score(X[, y]) Compute the per-sample average log-likelihood of the given data X. score_samples(X) Compute the weighted log probabilities for each sample. set_params(**params) Set the parameters of this estimator. aic(X) [source] Akaike information criterion for the current model on the input X. Parameters Xarray of shape (n_samples, n_dimensions) Returns aicfloat The lower the better. bic(X) [source] Bayesian information criterion for the current model on the input X. Parameters Xarray of shape (n_samples, n_dimensions) Returns bicfloat The lower the better. fit(X, y=None) [source] Estimate model parameters with the EM algorithm.
The method fits the model n_init times and sets the parameters with which the model has the largest likelihood or lower bound. Within each trial, the method iterates between the E-step and the M-step for at most max_iter times, until the change in the likelihood or lower bound is less than tol; otherwise a ConvergenceWarning is raised. If warm_start is True, then n_init is ignored and a single initialization is performed upon the first call. Upon consecutive calls, training starts where it left off. Parameters Xarray-like of shape (n_samples, n_features) List of n_features-dimensional data points. Each row corresponds to a single data point. Returns self fit_predict(X, y=None) [source] Estimate model parameters using X and predict the labels for X. The method fits the model n_init times and sets the parameters with which the model has the largest likelihood or lower bound. Within each trial, the method iterates between the E-step and the M-step for at most max_iter times, until the change in the likelihood or lower bound is less than tol; otherwise a ConvergenceWarning is raised. After fitting, it predicts the most probable label for the input data points. New in version 0.20. Parameters Xarray-like of shape (n_samples, n_features) List of n_features-dimensional data points. Each row corresponds to a single data point. Returns labelsarray, shape (n_samples,) Component labels. get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values. predict(X) [source] Predict the labels for the data samples in X using the trained model. Parameters Xarray-like of shape (n_samples, n_features) List of n_features-dimensional data points. Each row corresponds to a single data point. Returns labelsarray, shape (n_samples,) Component labels. predict_proba(X) [source] Predict the posterior probability of each component given the data. Parameters Xarray-like of shape (n_samples, n_features) List of n_features-dimensional data points. Each row corresponds to a single data point. Returns resparray, shape (n_samples, n_components) The probability of each Gaussian (state) in the model given each sample. sample(n_samples=1) [source] Generate random samples from the fitted Gaussian distribution. Parameters n_samplesint, default=1 Number of samples to generate. Returns Xarray, shape (n_samples, n_features) Randomly generated sample. yarray, shape (n_samples,) Component labels. score(X, y=None) [source] Compute the per-sample average log-likelihood of the given data X. Parameters Xarray-like of shape (n_samples, n_features) List of n_features-dimensional data points. Each row corresponds to a single data point. Returns log_likelihoodfloat Log likelihood of the Gaussian mixture given X. score_samples(X) [source] Compute the weighted log probabilities for each sample. Parameters Xarray-like of shape (n_samples, n_features) List of n_features-dimensional data points. Each row corresponds to a single data point. Returns log_probarray, shape (n_samples,) Log probabilities of each data point in X. set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object. Parameters **paramsdict Estimator parameters.
Returns selfestimator instance Estimator instance. Examples using sklearn.mixture.GaussianMixture: Comparing different clustering algorithms on toy datasets, Density Estimation for a Gaussian mixture, Gaussian Mixture Model Ellipsoids, Gaussian Mixture Model Selection, GMM covariances, Gaussian Mixture Model Sine Curve
sklearn.modules.generated.sklearn.mixture.gaussianmixture
aic(X) [source] Akaike information criterion for the current model on the input X. Parameters Xarray of shape (n_samples, n_dimensions) Returns aicfloat The lower the better.
sklearn.modules.generated.sklearn.mixture.gaussianmixture#sklearn.mixture.GaussianMixture.aic
bic(X) [source] Bayesian information criterion for the current model on the input X. Parameters Xarray of shape (n_samples, n_dimensions) Returns bicfloat The lower the better.
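Since both criteria are "the lower the better", a common pattern is to fit candidate models with different n_components and keep the one minimizing BIC (or AIC). A hedged sketch on synthetic data (the winning value depends on the data; two well-separated blobs should typically favor k=2):

>>> import numpy as np
>>> from sklearn.mixture import GaussianMixture
>>> rng = np.random.RandomState(0)
>>> X = np.vstack([rng.randn(100, 2), rng.randn(100, 2) + 10])  # two blobs
>>> bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
...         for k in (1, 2, 3, 4)}
>>> best_k = min(bics, key=bics.get)  # smallest BIC wins; typically 2 here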
sklearn.modules.generated.sklearn.mixture.gaussianmixture#sklearn.mixture.GaussianMixture.bic
fit(X, y=None) [source] Estimate model parameters with the EM algorithm. The method fits the model n_init times and sets the parameters with which the model has the largest likelihood or lower bound. Within each trial, the method iterates between the E-step and the M-step for at most max_iter times, until the change in the likelihood or lower bound is less than tol; otherwise a ConvergenceWarning is raised. If warm_start is True, then n_init is ignored and a single initialization is performed upon the first call. Upon consecutive calls, training starts where it left off. Parameters Xarray-like of shape (n_samples, n_features) List of n_features-dimensional data points. Each row corresponds to a single data point. Returns self
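A minimal sketch of the warm_start behavior described above (illustrative data; max_iter is kept small on purpose, so a ConvergenceWarning may be emitted on the first call):

>>> import numpy as np
>>> from sklearn.mixture import GaussianMixture
>>> rng = np.random.RandomState(0)
>>> X = np.vstack([rng.randn(50, 2), rng.randn(50, 2) + 5])
>>> gm = GaussianMixture(n_components=2, warm_start=True, max_iter=5,
...                      random_state=0)
>>> gm = gm.fit(X)  # first call: one initialization, at most 5 EM steps
>>> gm = gm.fit(X)  # second call: EM resumes from the previous parameters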
sklearn.modules.generated.sklearn.mixture.gaussianmixture#sklearn.mixture.GaussianMixture.fit
fit_predict(X, y=None) [source] Estimate model parameters using X and predict the labels for X. The method fits the model n_init times and sets the parameters with which the model has the largest likelihood or lower bound. Within each trial, the method iterates between the E-step and the M-step for at most max_iter times, until the change in the likelihood or lower bound is less than tol; otherwise a ConvergenceWarning is raised. After fitting, it predicts the most probable label for the input data points. New in version 0.20. Parameters Xarray-like of shape (n_samples, n_features) List of n_features-dimensional data points. Each row corresponds to a single data point. Returns labelsarray, shape (n_samples,) Component labels.
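With a fixed random_state, fit_predict(X) should agree with fitting and then predicting on the same data; a quick sketch (illustrative data):

>>> import numpy as np
>>> from sklearn.mixture import GaussianMixture
>>> X = np.array([[1, 2], [1, 4], [1, 0], [10, 2], [10, 4], [10, 0]])
>>> labels = GaussianMixture(n_components=2, random_state=0).fit_predict(X)
>>> labels.shape  # one component label per sample
(6,)
>>> ref = GaussianMixture(n_components=2, random_state=0).fit(X).predict(X)
>>> np.array_equal(labels, ref)  # same labels as fit(X).predict(X)
True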
sklearn.modules.generated.sklearn.mixture.gaussianmixture#sklearn.mixture.GaussianMixture.fit_predict
get_params(deep=True) [source] Get parameters for this estimator. Parameters deepbool, default=True If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns paramsdict Parameter names mapped to their values.
sklearn.modules.generated.sklearn.mixture.gaussianmixture#sklearn.mixture.GaussianMixture.get_params
predict(X) [source] Predict the labels for the data samples in X using the trained model. Parameters Xarray-like of shape (n_samples, n_features) List of n_features-dimensional data points. Each row corresponds to a single data point. Returns labelsarray, shape (n_samples,) Component labels.
sklearn.modules.generated.sklearn.mixture.gaussianmixture#sklearn.mixture.GaussianMixture.predict
predict_proba(X) [source] Predict the posterior probability of each component given the data. Parameters Xarray-like of shape (n_samples, n_features) List of n_features-dimensional data points. Each row corresponds to a single data point. Returns resparray, shape (n_samples, n_components) The probability of each Gaussian (state) in the model given each sample.
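The rows of the returned matrix are posterior probabilities and should each sum to 1; a minimal sketch on the toy data from the class example:

>>> import numpy as np
>>> from sklearn.mixture import GaussianMixture
>>> X = np.array([[1, 2], [1, 4], [1, 0], [10, 2], [10, 4], [10, 0]])
>>> gm = GaussianMixture(n_components=2, random_state=0).fit(X)
>>> proba = gm.predict_proba([[0, 0], [12, 3]])
>>> proba.shape  # (n_samples, n_components)
(2, 2)
>>> np.allclose(proba.sum(axis=1), 1.0)  # each row is a distribution
True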
sklearn.modules.generated.sklearn.mixture.gaussianmixture#sklearn.mixture.GaussianMixture.predict_proba
sample(n_samples=1) [source] Generate random samples from the fitted Gaussian distribution. Parameters n_samplesint, default=1 Number of samples to generate. Returns Xarray, shape (n_samples, n_features) Randomly generated sample. yarray, shape (n_samples,) Component labels.
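A short sketch (illustrative data): sample requires a fitted model and returns both the generated points and the component each point was drawn from:

>>> import numpy as np
>>> from sklearn.mixture import GaussianMixture
>>> X = np.array([[1, 2], [1, 4], [1, 0], [10, 2], [10, 4], [10, 0]])
>>> gm = GaussianMixture(n_components=2, random_state=0).fit(X)
>>> X_new, y_new = gm.sample(n_samples=5)
>>> X_new.shape, y_new.shape  # points and their component labels
((5, 2), (5,))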
sklearn.modules.generated.sklearn.mixture.gaussianmixture#sklearn.mixture.GaussianMixture.sample
score(X, y=None) [source] Compute the per-sample average log-likelihood of the given data X. Parameters Xarray-like of shape (n_samples, n_features) List of n_features-dimensional data points. Each row corresponds to a single data point. Returns log_likelihoodfloat Log likelihood of the Gaussian mixture given X.
sklearn.modules.generated.sklearn.mixture.gaussianmixture#sklearn.mixture.GaussianMixture.score
score_samples(X) [source] Compute the weighted log probabilities for each sample. Parameters Xarray-like of shape (n_samples, n_features) List of n_features-dimensional data points. Each row corresponds to a single data point. Returns log_probarray, shape (n_samples,) Log probabilities of each data point in X.
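Since score is documented above as the per-sample average log-likelihood, it should equal the mean of the values returned by score_samples; a quick consistency check (illustrative data):

>>> import numpy as np
>>> from sklearn.mixture import GaussianMixture
>>> X = np.array([[1, 2], [1, 4], [1, 0], [10, 2], [10, 4], [10, 0]])
>>> gm = GaussianMixture(n_components=2, random_state=0).fit(X)
>>> bool(np.isclose(gm.score(X), gm.score_samples(X).mean()))
True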
sklearn.modules.generated.sklearn.mixture.gaussianmixture#sklearn.mixture.GaussianMixture.score_samples
set_params(**params) [source] Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object. Parameters **paramsdict Estimator parameters. Returns selfestimator instance Estimator instance.
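The <component>__<parameter> syntax applies when the estimator is nested, e.g. inside a Pipeline; a minimal sketch (the step name 'gmm' is a hypothetical choice):

>>> from sklearn.pipeline import Pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.mixture import GaussianMixture
>>> pipe = Pipeline([('scale', StandardScaler()), ('gmm', GaussianMixture())])
>>> pipe = pipe.set_params(gmm__n_components=3)  # reaches into the nested step
>>> pipe.get_params()['gmm__n_components']
3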
sklearn.modules.generated.sklearn.mixture.gaussianmixture#sklearn.mixture.GaussianMixture.set_params
sklearn.model_selection.check_cv(cv=5, y=None, *, classifier=False) [source] Input checker utility for building a cross-validator. Parameters cvint, cross-validation generator or an iterable, default=5 Determines the cross-validation splitting strategy. Possible inputs for cv are: None, to use the default 5-fold cross-validation; an integer, to specify the number of folds; a CV splitter; an iterable yielding (train, test) splits as arrays of indices. For integer/None inputs, if classifier is True and y is either binary or multiclass, StratifiedKFold is used. In all other cases, KFold is used. Refer to the User Guide for the various cross-validation strategies that can be used here. Changed in version 0.22: cv default value changed from 3-fold to 5-fold. yarray-like, default=None The target variable for supervised learning problems. classifierbool, default=False Whether the task is a classification task, in which case stratified KFold will be used. Returns checked_cva cross-validator instance. The return value is a cross-validator which generates the train/test splits via the split method.
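A minimal sketch of the dispatch rule described above (the labels are illustrative):

>>> import numpy as np
>>> from sklearn.model_selection import check_cv
>>> y = np.array([0, 1, 0, 1, 0, 1, 0, 1, 0, 1])
>>> type(check_cv(5, y, classifier=True)).__name__   # stratified for classification
'StratifiedKFold'
>>> type(check_cv(5, y, classifier=False)).__name__  # plain KFold otherwise
'KFold'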
sklearn.modules.generated.sklearn.model_selection.check_cv#sklearn.model_selection.check_cv
sklearn.model_selection.cross_validate(estimator, X, y=None, *, groups=None, scoring=None, cv=None, n_jobs=None, verbose=0, fit_params=None, pre_dispatch='2*n_jobs', return_train_score=False, return_estimator=False, error_score=nan) [source] Evaluate metric(s) by cross-validation and also record fit/score times. Read more in the User Guide. Parameters estimatorestimator object implementing 'fit' The object to use to fit the data. Xarray-like of shape (n_samples, n_features) The data to fit. Can be, for example, a list or an array. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None The target variable to try to predict in the case of supervised learning. groupsarray-like of shape (n_samples,), default=None Group labels for the samples used while splitting the dataset into train/test set. Only used in conjunction with a "Group" cv instance (e.g., GroupKFold). scoringstr, callable, list, tuple, or dict, default=None Strategy to evaluate the performance of the cross-validated model on the test set. If scoring represents a single score, one can use: a single string (see The scoring parameter: defining model evaluation rules); a callable (see Defining your scoring strategy from metric functions) that returns a single value. If scoring represents multiple scores, one can use: a list or tuple of unique strings; a callable returning a dictionary where the keys are the metric names and the values are the metric scores; a dictionary with metric names as keys and callables as values. See Specifying multiple metrics for evaluation for an example. cvint, cross-validation generator or an iterable, default=None Determines the cross-validation splitting strategy. Possible inputs for cv are: None, to use the default 5-fold cross validation, int, to specify the number of folds in a (Stratified)KFold, CV splitter, An iterable yielding (train, test) splits as arrays of indices. For int/None inputs, if the estimator is a classifier and y is either binary or multiclass, StratifiedKFold is used. In all other cases, KFold is used. Refer to the User Guide for the various cross-validation strategies that can be used here. Changed in version 0.22: cv default value if None changed from 3-fold to 5-fold. n_jobsint, default=None Number of jobs to run in parallel. Training the estimator and computing the score are parallelized over the cross-validation splits. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. verboseint, default=0 The verbosity level. fit_paramsdict, default=None Parameters to pass to the fit method of the estimator. pre_dispatchint or str, default='2*n_jobs' Controls the number of jobs that get dispatched during parallel execution. Reducing this number can be useful to avoid an explosion of memory consumption when more jobs get dispatched than CPUs can process. This parameter can be: None, in which case all the jobs are immediately created and spawned. Use this for lightweight and fast-running jobs, to avoid delays due to on-demand spawning of the jobs An int, giving the exact number of total jobs that are spawned A str, giving an expression as a function of n_jobs, as in '2*n_jobs' return_train_scorebool, default=False Whether to include train scores. Computing training scores is used to get insights on how different parameter settings impact the overfitting/underfitting trade-off.
However, computing the scores on the training set can be computationally expensive and is not strictly required to select the parameters that yield the best generalization performance. New in version 0.19. Changed in version 0.21: Default value was changed from True to False. return_estimatorbool, default=False Whether to return the estimators fitted on each split. New in version 0.20. error_score'raise' or numeric, default=np.nan Value to assign to the score if an error occurs in estimator fitting. If set to 'raise', the error is raised. If a numeric value is given, FitFailedWarning is raised. New in version 0.20. Returns scoresdict of float arrays of shape (n_splits,) Array of scores of the estimator for each run of the cross validation. A dict of arrays containing the score/time arrays for each scorer is returned. The possible keys for this dict are: test_score The score array for test scores on each cv split. Suffix _score in test_score changes to a specific metric like test_r2 or test_auc if there are multiple scoring metrics in the scoring parameter. train_score The score array for train scores on each cv split. Suffix _score in train_score changes to a specific metric like train_r2 or train_auc if there are multiple scoring metrics in the scoring parameter. This is available only if the return_train_score parameter is True. fit_time The time for fitting the estimator on the train set for each cv split. score_time The time for scoring the estimator on the test set for each cv split. (Note that the time for scoring on the train set is not included even if return_train_score is set to True.) estimator The estimator objects for each cv split. This is available only if the return_estimator parameter is set to True. See also cross_val_score Run cross-validation for single metric evaluation. cross_val_predict Get predictions from each split of cross-validation for diagnostic purposes. sklearn.metrics.make_scorer Make a scorer from a performance metric or loss function. Examples >>> from sklearn import datasets, linear_model >>> from sklearn.model_selection import cross_validate >>> from sklearn.metrics import make_scorer >>> from sklearn.metrics import confusion_matrix >>> from sklearn.svm import LinearSVC >>> diabetes = datasets.load_diabetes() >>> X = diabetes.data[:150] >>> y = diabetes.target[:150] >>> lasso = linear_model.Lasso() Single metric evaluation using cross_validate >>> cv_results = cross_validate(lasso, X, y, cv=3) >>> sorted(cv_results.keys()) ['fit_time', 'score_time', 'test_score'] >>> cv_results['test_score'] array([0.33150734, 0.08022311, 0.03531764]) Multiple metric evaluation using cross_validate (please refer to the scoring parameter doc for more information) >>> scores = cross_validate(lasso, X, y, cv=3, ... scoring=('r2', 'neg_mean_squared_error'), ... return_train_score=True) >>> print(scores['test_neg_mean_squared_error']) [-3635.5... -3573.3... -6114.7...] >>> print(scores['train_r2']) [0.28010158 0.39088426 0.22784852]
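Continuing the example above, return_estimator=True additionally hands back the fitted estimator for each split (a brief sketch):

>>> cv_results = cross_validate(lasso, X, y, cv=3, return_estimator=True)
>>> len(cv_results['estimator'])  # one fitted Lasso per CV split
3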
sklearn.modules.generated.sklearn.model_selection.cross_validate#sklearn.model_selection.cross_validate
sklearn.model_selection.cross_val_predict(estimator, X, y=None, *, groups=None, cv=None, n_jobs=None, verbose=0, fit_params=None, pre_dispatch='2*n_jobs', method='predict') [source] Generate cross-validated estimates for each input data point. The data is split according to the cv parameter. Each sample belongs to exactly one test set, and its prediction is computed with an estimator fitted on the corresponding training set. Passing these predictions into an evaluation metric may not be a valid way to measure generalization performance. Results can differ from cross_validate and cross_val_score unless all test sets have equal size and the metric decomposes over samples. Read more in the User Guide. Parameters estimatorestimator object implementing 'fit' and 'predict' The object to use to fit the data. Xarray-like of shape (n_samples, n_features) The data to fit. Can be, for example, a list, or an array of at least 2 dimensions. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None The target variable to try to predict in the case of supervised learning. groupsarray-like of shape (n_samples,), default=None Group labels for the samples used while splitting the dataset into train/test set. Only used in conjunction with a "Group" cv instance (e.g., GroupKFold). cvint, cross-validation generator or an iterable, default=None Determines the cross-validation splitting strategy. Possible inputs for cv are: None, to use the default 5-fold cross validation, int, to specify the number of folds in a (Stratified)KFold, CV splitter, An iterable yielding (train, test) splits as arrays of indices. For int/None inputs, if the estimator is a classifier and y is either binary or multiclass, StratifiedKFold is used. In all other cases, KFold is used. Refer to the User Guide for the various cross-validation strategies that can be used here. Changed in version 0.22: cv default value if None changed from 3-fold to 5-fold. n_jobsint, default=None Number of jobs to run in parallel. Training the estimator and predicting are parallelized over the cross-validation splits. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. verboseint, default=0 The verbosity level. fit_paramsdict, default=None Parameters to pass to the fit method of the estimator. pre_dispatchint or str, default='2*n_jobs' Controls the number of jobs that get dispatched during parallel execution. Reducing this number can be useful to avoid an explosion of memory consumption when more jobs get dispatched than CPUs can process. This parameter can be: None, in which case all the jobs are immediately created and spawned. Use this for lightweight and fast-running jobs, to avoid delays due to on-demand spawning of the jobs An int, giving the exact number of total jobs that are spawned A str, giving an expression as a function of n_jobs, as in '2*n_jobs' method{'predict', 'predict_proba', 'predict_log_proba', 'decision_function'}, default='predict' The method to be invoked by estimator. Returns predictionsndarray This is the result of calling method. Shape: When method is 'predict' and in the special case where method is 'decision_function' and the target is binary: (n_samples,) When method is one of {'predict_proba', 'predict_log_proba', 'decision_function'} (unless special case above): (n_samples, n_classes) If estimator is multioutput, an extra dimension 'n_outputs' is added to the end of each shape above. See also cross_val_score Calculate score for each CV split.
cross_validate Calculate one or more scores and timings for each CV split. Notes In the case that one or more classes are absent in a training portion, a default score needs to be assigned to all instances for that class if method produces columns per class, as in {‘decision_function’, ‘predict_proba’, ‘predict_log_proba’}. For predict_proba this value is 0. In order to ensure finite output, we approximate negative infinity by the minimum finite float value for the dtype in other cases. Examples >>> from sklearn import datasets, linear_model >>> from sklearn.model_selection import cross_val_predict >>> diabetes = datasets.load_diabetes() >>> X = diabetes.data[:150] >>> y = diabetes.target[:150] >>> lasso = linear_model.Lasso() >>> y_pred = cross_val_predict(lasso, X, y, cv=3)
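A hedged sketch of the method parameter on a small classification dataset (LogisticRegression and iris are chosen only for illustration):

>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.model_selection import cross_val_predict
>>> X, y = load_iris(return_X_y=True)
>>> proba = cross_val_predict(LogisticRegression(max_iter=1000), X, y, cv=3,
...                           method='predict_proba')
>>> proba.shape  # (n_samples, n_classes)
(150, 3)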
sklearn.modules.generated.sklearn.model_selection.cross_val_predict#sklearn.model_selection.cross_val_predict
sklearn.model_selection.cross_val_score(estimator, X, y=None, *, groups=None, scoring=None, cv=None, n_jobs=None, verbose=0, fit_params=None, pre_dispatch='2*n_jobs', error_score=nan) [source] Evaluate a score by cross-validation. Read more in the User Guide. Parameters estimatorestimator object implementing 'fit' The object to use to fit the data. Xarray-like of shape (n_samples, n_features) The data to fit. Can be, for example, a list or an array. yarray-like of shape (n_samples,) or (n_samples, n_outputs), default=None The target variable to try to predict in the case of supervised learning. groupsarray-like of shape (n_samples,), default=None Group labels for the samples used while splitting the dataset into train/test set. Only used in conjunction with a "Group" cv instance (e.g., GroupKFold). scoringstr or callable, default=None A str (see model evaluation documentation) or a scorer callable object / function with signature scorer(estimator, X, y) which should return only a single value. Similar to cross_validate but only a single metric is permitted. If None, the estimator's default scorer (if available) is used. cvint, cross-validation generator or an iterable, default=None Determines the cross-validation splitting strategy. Possible inputs for cv are: None, to use the default 5-fold cross validation, int, to specify the number of folds in a (Stratified)KFold, CV splitter, An iterable yielding (train, test) splits as arrays of indices. For int/None inputs, if the estimator is a classifier and y is either binary or multiclass, StratifiedKFold is used. In all other cases, KFold is used. Refer to the User Guide for the various cross-validation strategies that can be used here. Changed in version 0.22: cv default value if None changed from 3-fold to 5-fold. n_jobsint, default=None Number of jobs to run in parallel. Training the estimator and computing the score are parallelized over the cross-validation splits. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details. verboseint, default=0 The verbosity level. fit_paramsdict, default=None Parameters to pass to the fit method of the estimator. pre_dispatchint or str, default='2*n_jobs' Controls the number of jobs that get dispatched during parallel execution. Reducing this number can be useful to avoid an explosion of memory consumption when more jobs get dispatched than CPUs can process. This parameter can be: None, in which case all the jobs are immediately created and spawned. Use this for lightweight and fast-running jobs, to avoid delays due to on-demand spawning of the jobs An int, giving the exact number of total jobs that are spawned A str, giving an expression as a function of n_jobs, as in '2*n_jobs' error_score'raise' or numeric, default=np.nan Value to assign to the score if an error occurs in estimator fitting. If set to 'raise', the error is raised. If a numeric value is given, FitFailedWarning is raised. New in version 0.20. Returns scoresndarray of float of shape=(len(list(cv)),) Array of scores of the estimator for each run of the cross validation. See also cross_validate To run cross-validation on multiple metrics and also to return train scores, fit times and score times. cross_val_predict Get predictions from each split of cross-validation for diagnostic purposes. sklearn.metrics.make_scorer Make a scorer from a performance metric or loss function.
Examples >>> from sklearn import datasets, linear_model >>> from sklearn.model_selection import cross_val_score >>> diabetes = datasets.load_diabetes() >>> X = diabetes.data[:150] >>> y = diabetes.target[:150] >>> lasso = linear_model.Lasso() >>> print(cross_val_score(lasso, X, y, cv=3)) [0.33150734 0.08022311 0.03531764]
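Continuing the example, any single-metric scoring string can be passed; a brief sketch using mean squared error (these scores are negated so that larger is better):

>>> scores = cross_val_score(lasso, X, y, cv=3,
...                          scoring='neg_mean_squared_error')
>>> scores.shape  # one score per CV split
(3,)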
sklearn.modules.generated.sklearn.model_selection.cross_val_score#sklearn.model_selection.cross_val_score