Columns: `code` (string, 66–870k chars) · `docstring` (string, 19–26.7k chars) · `func_name` (string, 1–138 chars) · `language` (string, 1 class) · `repo` (string, 7–68 chars) · `path` (string, 5–324 chars) · `url` (string, 46–389 chars) · `license` (string, 7 classes)

**`_validate_multiclass_probabilistic_prediction`** — scikit-learn/scikit-learn · [`sklearn/metrics/_classification.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_classification.py) · python · BSD-3-Clause
Signature: `def _validate_multiclass_probabilistic_prediction(y_true, y_prob, sample_weight, labels): …`
Docstring: Convert y_true and y_prob to shape (n_samples, n_classes). 1. Verify that y_true, y_prob, and sample_weights have the same first dim. 2. Ensure 2 or more classes in y_true i.e. valid classification task. The classes are provided by the labels argument, or inferred using y_true. When inferring y_tru…

**`accuracy_score`** — scikit-learn/scikit-learn · [`sklearn/metrics/_classification.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_classification.py) · python · BSD-3-Clause
Signature: `def accuracy_score(y_true, y_pred, *, normalize=True, sample_weight=None): …`
Docstring: Accuracy classification score. In multilabel classification, this function computes subset accuracy: the set of labels predicted for a sample must *exactly* match the corresponding set of labels in y_true. Read more in the :ref:`User Guide <accuracy_score>`. Parameters: `y_true` : …

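The subset-accuracy behaviour described in the docstring can be sketched in a few lines of plain Python. This is an illustrative reimplementation under simplifying assumptions (no `sample_weight`), and `subset_accuracy` is a made-up name, not sklearn's API:

```python
def subset_accuracy(y_true, y_pred, normalize=True):
    # A sample counts as correct only if its full label set matches exactly.
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true) if normalize else correct

# Multilabel case: (1, 1) vs (1, 0) is a miss even though one label agrees.
y_true = [(0, 1), (1, 1), (0, 0)]
y_pred = [(0, 1), (1, 0), (0, 0)]
print(subset_accuracy(y_true, y_pred))  # 2 of 3 samples match exactly
```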
**`confusion_matrix`** — scikit-learn/scikit-learn · [`sklearn/metrics/_classification.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_classification.py) · python · BSD-3-Clause
Signature: `def confusion_matrix(y_true, y_pred, *, labels=None, sample_weight=None, normalize=None): …`
Docstring: Compute confusion matrix to evaluate the accuracy of a classification. By definition a confusion matrix :math:`C` is such that :math:`C_{i, j}` is equal to the number of observations known to be in group :math:`i` and predicted to be in group :math:`j`. Thus in binary classification, the count of true…

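The definition `C[i][j] = number of observations known to be in group i and predicted to be in group j` can be sketched directly; this is a minimal illustration (no `labels`, `sample_weight`, or `normalize` handling), not sklearn's implementation:

```python
def confusion_matrix_sketch(y_true, y_pred):
    # C[i][j] = count of samples with true class i predicted as class j,
    # with classes indexed in sorted order.
    classes = sorted(set(y_true) | set(y_pred))
    index = {c: k for k, c in enumerate(classes)}
    C = [[0] * len(classes) for _ in classes]
    for t, p in zip(y_true, y_pred):
        C[index[t]][index[p]] += 1
    return C

print(confusion_matrix_sketch([0, 1, 1, 2], [0, 1, 2, 2]))
# [[1, 0, 0], [0, 1, 1], [0, 0, 1]]
```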
**`multilabel_confusion_matrix`** — scikit-learn/scikit-learn · [`sklearn/metrics/_classification.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_classification.py) · python · BSD-3-Clause
Signature: `def multilabel_confusion_matrix(y_true, y_pred, *, sample_weight=None, labels=None, samplewise=False): …`
Docstring: Compute a confusion matrix for each class or sample. .. versionadded:: 0.21. Compute class-wise (default) or sample-wise (samplewise=True) multilabel confusion matrix to evaluate the accuracy of a classification, and output confusion matrices for each class or sample. In multilabel confusion matri…

**`jaccard_score`** — scikit-learn/scikit-learn · [`sklearn/metrics/_classification.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_classification.py) · python · BSD-3-Clause
Signature: `def jaccard_score(y_true, y_pred, *, labels=None, pos_label=1, average="binary", sample_weight=None, zero_division="warn"): …`
Docstring: Jaccard similarity coefficient score. The Jaccard index [1], or Jaccard similarity coefficient, defined as the size of the intersection divided by the size of the union of two label sets, is used to compare set of predicted labels for a sample to the corresponding set of labels in ``y_true``. Supp…

**`matthews_corrcoef`** — scikit-learn/scikit-learn · [`sklearn/metrics/_classification.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_classification.py) · python · BSD-3-Clause
Signature: `def matthews_corrcoef(y_true, y_pred, *, sample_weight=None): …`
Docstring: Compute the Matthews correlation coefficient (MCC). The Matthews correlation coefficient is used in machine learning as a measure of the quality of binary and multiclass classifications. It takes into account true and false positives and negatives and is generally regarded as a balanced measure which c…

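For the binary case, MCC reduces to a closed-form expression over the four confusion-matrix counts. A minimal sketch (assumes {0, 1} labels, no `sample_weight`; `mcc_binary` is an illustrative name, not sklearn's API):

```python
import math

def mcc_binary(y_true, y_pred):
    # MCC = (tp*tn - fp*fn) / sqrt((tp+fp)(tp+fn)(tn+fp)(tn+fn)),
    # with 0.0 returned when the denominator vanishes.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(mcc_binary([0, 1], [0, 1]))  # 1.0 for a perfect prediction
```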
**`zero_one_loss`** — scikit-learn/scikit-learn · [`sklearn/metrics/_classification.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_classification.py) · python · BSD-3-Clause
Signature: `def zero_one_loss(y_true, y_pred, *, normalize=True, sample_weight=None): …`
Docstring: Zero-one classification loss. If normalize is ``True``, return the fraction of misclassifications (float), else it returns the number of misclassifications (int). The best performance is 0. Read more in the :ref:`User Guide <zero_one_loss>`. Parameters: `y_true` : 1d array-like, o…

**`f1_score`** — scikit-learn/scikit-learn · [`sklearn/metrics/_classification.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_classification.py) · python · BSD-3-Clause
Signature: `def f1_score(y_true, y_pred, *, labels=None, pos_label=1, average="binary", sample_weight=None, zero_division="warn"): …`
Docstring: Compute the F1 score, also known as balanced F-score or F-measure. The F1 score can be interpreted as a harmonic mean of the precision and recall, where an F1 score reaches its best value at 1 and worst score at 0. The relative contribution of precision and recall to the F1 score are equal. The formula…

**`fbeta_score`** — scikit-learn/scikit-learn · [`sklearn/metrics/_classification.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_classification.py) · python · BSD-3-Clause
Signature: `def fbeta_score(y_true, y_pred, *, beta, labels=None, pos_label=1, average="binary", sample_weight=None, zero_division="warn"): …`
Docstring: Compute the F-beta score. The F-beta score is the weighted harmonic mean of precision and recall, reaching its optimal value at 1 and its worst value at 0. The `beta` parameter represents the ratio of recall importance to precision importance. `beta > 1` gives more weight to recall, while `beta < …

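The weighted harmonic mean described by the F1/F-beta docstrings can be sketched for the binary positive class; this is an illustration of the formula `F_beta = (1 + beta^2) * P * R / (beta^2 * P + R)` under simplifying assumptions ({0, 1} labels, no averaging modes), not sklearn's implementation:

```python
def fbeta_sketch(y_true, y_pred, beta=1.0):
    # Precision = tp / (tp + fp), recall = tp / (tp + fn) for the positive class.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision == recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(fbeta_sketch([1, 1, 0, 1, 0], [1, 0, 0, 1, 1], beta=1.0))
```

With `beta=1.0` this is exactly the F1 score (equal weight on precision and recall).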
**`_prf_divide`** — scikit-learn/scikit-learn · [`sklearn/metrics/_classification.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_classification.py) · python · BSD-3-Clause
Signature: `def _prf_divide(numerator, denominator, metric, modifier, average, warn_for, zero_division="warn"): …`
Docstring: Performs division and handles divide-by-zero. On zero-division, sets the corresponding result elements equal to 0, 1 or np.nan (according to ``zero_division``). Plus, if ``zero_division != "warn"`` raises a warning. The metric, modifier and average arguments are used only for determining an approp…

**`_check_set_wise_labels`** — scikit-learn/scikit-learn · [`sklearn/metrics/_classification.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_classification.py) · python · BSD-3-Clause
Signature: `def _check_set_wise_labels(y_true, y_pred, average, labels, pos_label): …`
Docstring: Validation associated with set-wise metrics. Returns identified labels.

**`precision_recall_fscore_support`** — scikit-learn/scikit-learn · [`sklearn/metrics/_classification.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_classification.py) · python · BSD-3-Clause
Signature: `def precision_recall_fscore_support(y_true, y_pred, *, beta=1.0, labels=None, pos_label=1, average=None, warn_for=("precision", "recall", "f-score"), sample_weight=None, zero_division="warn"): …`
Docstring: Compute precision, recall, F-measure and support for each class. The precision is the ratio ``tp / (tp + fp)`` where ``tp`` is the number of true positives and ``fp`` the number of false positives. The precision is intuitively the ability of the classifier not to label a negative sample as positive. …

**`class_likelihood_ratios`** — scikit-learn/scikit-learn · [`sklearn/metrics/_classification.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_classification.py) · python · BSD-3-Clause
Signature: `def class_likelihood_ratios(y_true, y_pred, *, labels=None, sample_weight=None, raise_warning="deprecated", replace_undefined_by=np.nan): …`
Docstring: Compute binary classification positive and negative likelihood ratios. The positive likelihood ratio is `LR+ = sensitivity / (1 - specificity)` where the sensitivity or recall is the ratio `tp / (tp + fn)` and the specificity is `tn / (tn + fp)`. The negative likelihood ratio is `LR- = (1 - sensitivity…

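The two ratios defined in the docstring can be sketched directly from the confusion-matrix counts. A minimal illustration that assumes {0, 1} labels, at least one sample per class, and non-zero denominators (sklearn's version handles the undefined cases via `replace_undefined_by`):

```python
def likelihood_ratios(y_true, y_pred):
    # LR+ = sensitivity / (1 - specificity); LR- = (1 - sensitivity) / specificity.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn)   # recall of the positive class
    specificity = tn / (tn + fp)   # recall of the negative class
    return sensitivity / (1 - specificity), (1 - sensitivity) / specificity
```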
**`precision_score`** — scikit-learn/scikit-learn · [`sklearn/metrics/_classification.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_classification.py) · python · BSD-3-Clause
Signature: `def precision_score(y_true, y_pred, *, labels=None, pos_label=1, average="binary", sample_weight=None, zero_division="warn"): …`
Docstring: Compute the precision. The precision is the ratio ``tp / (tp + fp)`` where ``tp`` is the number of true positives and ``fp`` the number of false positives. The precision is intuitively the ability of the classifier not to label as positive a sample that is negative. The best value is 1 and the wor…

**`recall_score`** — scikit-learn/scikit-learn · [`sklearn/metrics/_classification.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_classification.py) · python · BSD-3-Clause
Signature: `def recall_score(y_true, y_pred, *, labels=None, pos_label=1, average="binary", sample_weight=None, zero_division="warn"): …`
Docstring: Compute the recall. The recall is the ratio ``tp / (tp + fn)`` where ``tp`` is the number of true positives and ``fn`` the number of false negatives. The recall is intuitively the ability of the classifier to find all the positive samples. The best value is 1 and the worst value is 0. Support bey…

**`classification_report`** — scikit-learn/scikit-learn · [`sklearn/metrics/_classification.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_classification.py) · python · BSD-3-Clause
Signature: `def classification_report(y_true, y_pred, *, labels=None, target_names=None, sample_weight=None, digits=2, output_dict=False, zero_division="warn"): …`
Docstring: Build a text report showing the main classification metrics. Read more in the :ref:`User Guide <classification_report>`. Parameters: `y_true` : 1d array-like, or label indicator array / sparse matrix — Ground truth (correct) target values. `y_pred` : 1d array-like, or label indicator …

**`hamming_loss`** — scikit-learn/scikit-learn · [`sklearn/metrics/_classification.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_classification.py) · python · BSD-3-Clause
Signature: `def hamming_loss(y_true, y_pred, *, sample_weight=None): …`
Docstring: Compute the average Hamming loss. The Hamming loss is the fraction of labels that are incorrectly predicted. Read more in the :ref:`User Guide <hamming_loss>`. Parameters: `y_true` : 1d array-like, or label indicator array / sparse matrix — Ground truth (correct) labels. `y_pred`…

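"Fraction of labels that are incorrectly predicted" has a very short multilabel sketch. An illustration only (dense lists, no `sample_weight`), not sklearn's implementation:

```python
def hamming_loss_sketch(y_true, y_pred):
    # Count every individual label slot that disagrees, then divide by the
    # total number of label slots (n_samples * n_labels).
    n_samples, n_labels = len(y_true), len(y_true[0])
    wrong = sum(t != p
                for row_t, row_p in zip(y_true, y_pred)
                for t, p in zip(row_t, row_p))
    return wrong / (n_samples * n_labels)

print(hamming_loss_sketch([[0, 1], [1, 1]], [[0, 0], [1, 1]]))  # 1 wrong of 4 slots
```

Note how this differs from subset accuracy: a single wrong label costs 1/(n·L) here, but fails the whole sample there.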
**`log_loss`** — scikit-learn/scikit-learn · [`sklearn/metrics/_classification.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_classification.py) · python · BSD-3-Clause
Signature: `def log_loss(y_true, y_pred, *, normalize=True, sample_weight=None, labels=None): …`
Docstring: Log loss, aka logistic loss or cross-entropy loss. This is the loss function used in (multinomial) logistic regression and extensions of it such as neural networks, defined as the negative log-likelihood of a logistic model that returns ``y_pred`` probabilities for its training data ``y_true``. The…

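The negative log-likelihood described above can be sketched for the binary case. This is an illustrative version (binary labels, explicit probability clipping to avoid `log(0)`), not sklearn's implementation:

```python
import math

def log_loss_sketch(y_true, y_prob, eps=1e-15):
    # Mean of -log(p) for true positives and -log(1 - p) for true negatives,
    # with p clipped into [eps, 1 - eps].
    total = 0.0
    for t, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1 - eps)
        total += -math.log(p) if t == 1 else -math.log(1 - p)
    return total / len(y_true)

print(log_loss_sketch([1, 0], [0.9, 0.1]))  # both samples assigned p=0.9 to the true class
```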
**`hinge_loss`** — scikit-learn/scikit-learn · [`sklearn/metrics/_classification.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_classification.py) · python · BSD-3-Clause
Signature: `def hinge_loss(y_true, pred_decision, *, labels=None, sample_weight=None): …`
Docstring: Average hinge loss (non-regularized). In binary class case, assuming labels in y_true are encoded with +1 and -1, when a prediction mistake is made, ``margin = y_true * pred_decision`` is always negative (since the signs disagree), implying ``1 - margin`` is always greater than 1. The cumulated hinge …

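The binary margin formulation above (labels encoded as +1/-1) has a direct one-liner sketch; an illustration only, without the multiclass or `sample_weight` handling:

```python
def hinge_loss_sketch(y_true, pred_decision):
    # Per-sample loss is max(0, 1 - y * decision): zero when the margin
    # y * decision is at least 1, growing linearly as the margin shrinks.
    losses = [max(0.0, 1.0 - t * d) for t, d in zip(y_true, pred_decision)]
    return sum(losses) / len(losses)

# Only the third sample is inside the margin (1 - 0.5 = 0.5).
print(hinge_loss_sketch([-1, 1, 1], [-2.2, 1.3, 0.5]))
```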
**`_validate_binary_probabilistic_prediction`** — scikit-learn/scikit-learn · [`sklearn/metrics/_classification.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_classification.py) · python · BSD-3-Clause
Signature: `def _validate_binary_probabilistic_prediction(y_true, y_prob, sample_weight, pos_label): …`
Docstring: Convert y_true and y_prob in binary classification to shape (n_samples, 2). Parameters: `y_true` : array-like of shape (n_samples,) — True labels. `y_prob` : array-like of shape (n_samples,) — Probabilities of the positive class. `sample_weight` : array-like of shape (n_samples,), …

**`brier_score_loss`** — scikit-learn/scikit-learn · [`sklearn/metrics/_classification.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_classification.py) · python · BSD-3-Clause
Signature: `def brier_score_loss(y_true, y_proba, *, sample_weight=None, pos_label=None, labels=None, scale_by_half="auto"): …`
Docstring: Compute the Brier score loss. The smaller the Brier score loss, the better, hence the naming with "loss". The Brier score measures the mean squared difference between the predicted probability and the actual outcome. The Brier score is a strictly proper scoring rule. Read more in the :ref:`User Gu…

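"Mean squared difference between the predicted probability and the actual outcome" reduces, for the binary case, to one expression. A sketch that ignores `sample_weight`, `pos_label`, and the `scale_by_half` option, and is not sklearn's implementation:

```python
def brier_sketch(y_true, y_proba):
    # y_true in {0, 1}; y_proba is the predicted probability of class 1.
    return sum((p - t) ** 2 for t, p in zip(y_true, y_proba)) / len(y_true)

print(brier_sketch([1, 0], [0.9, 0.2]))  # ((0.1)^2 + (0.2)^2) / 2
```

Being a strictly proper scoring rule, this is minimized in expectation only by reporting the true probabilities.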
**`d2_log_loss_score`** — scikit-learn/scikit-learn · [`sklearn/metrics/_classification.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_classification.py) · python · BSD-3-Clause
Signature: `def d2_log_loss_score(y_true, y_pred, *, sample_weight=None, labels=None): …`
Docstring: :math:`D^2` score function, fraction of log loss explained. Best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A model that always predicts the per-class proportions of `y_true`, disregarding the input features, gets a D^2 score of 0.0. Read more in th…

**`auc`** — scikit-learn/scikit-learn · [`sklearn/metrics/_ranking.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_ranking.py) · python · BSD-3-Clause
Signature: `def auc(x, y): …`
Docstring: Compute Area Under the Curve (AUC) using the trapezoidal rule. This is a general function, given points on a curve. For computing the area under the ROC-curve, see :func:`roc_auc_score`. For an alternative way to summarize a precision-recall curve, see :func:`average_precision_score`. Parameters…

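The trapezoidal rule over curve points is short enough to sketch inline; an illustration assuming `x` is already sorted (ascending or descending), not sklearn's implementation:

```python
def trapezoid_auc(x, y):
    # Sum of trapezoid areas between consecutive points; abs() makes the
    # result independent of whether x is ascending or descending.
    area = 0.0
    for i in range(1, len(x)):
        area += (x[i] - x[i - 1]) * (y[i] + y[i - 1]) / 2.0
    return abs(area)

print(trapezoid_auc([0.0, 1.0], [0.0, 1.0]))  # the diagonal: area 0.5
```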
**`average_precision_score`** — scikit-learn/scikit-learn · [`sklearn/metrics/_ranking.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_ranking.py) · python · BSD-3-Clause
Signature: `def average_precision_score(y_true, y_score, *, average="macro", pos_label=1, sample_weight=None): …`
Docstring: Compute average precision (AP) from prediction scores. AP summarizes a precision-recall curve as the weighted mean of precisions achieved at each threshold, with the increase in recall from the previous threshold used as the weight: .. math:: \text{AP} = \sum_n (R_n - R_{n-1}) P_n where :…

**`det_curve`** — scikit-learn/scikit-learn · [`sklearn/metrics/_ranking.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_ranking.py) · python · BSD-3-Clause
Signature: `def det_curve(y_true, y_score, pos_label=None, sample_weight=None, drop_intermediate=False): …`
Docstring: Compute Detection Error Tradeoff (DET) for different probability thresholds. .. note:: This metric is used for evaluation of ranking and error tradeoffs of a binary classification task. Read more in the :ref:`User Guide <det_curve>`. .. versionadded:: 0.24 .. versionchanged:: 1.7 …

**`roc_auc_score`** — scikit-learn/scikit-learn · [`sklearn/metrics/_ranking.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_ranking.py) · python · BSD-3-Clause
Signature: `def roc_auc_score(y_true, y_score, *, average="macro", sample_weight=None, max_fpr=None, multi_class="raise", labels=None): …`
Docstring: Compute Area Under the Receiver Operating Characteristic Curve (ROC AUC) from prediction scores. Note: this implementation can be used with binary, multiclass and multilabel classification, but some restrictions apply (see Parameters). Read more in the :ref:`User Guide <roc_metrics>`. Parameters…

**`_multiclass_roc_auc_score`** — scikit-learn/scikit-learn · [`sklearn/metrics/_ranking.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_ranking.py) · python · BSD-3-Clause
Signature: `def _multiclass_roc_auc_score(y_true, y_score, labels, multi_class, average, sample_weight): …`
Docstring: Multiclass roc auc score. Parameters: `y_true` : array-like of shape (n_samples,) — True multiclass labels. `y_score` : array-like of shape (n_samples, n_classes) — Target scores corresponding to probability estimates of a sample belonging to a particular class. `labels` : …

**`_binary_clf_curve`** — scikit-learn/scikit-learn · [`sklearn/metrics/_ranking.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_ranking.py) · python · BSD-3-Clause
Signature: `def _binary_clf_curve(y_true, y_score, pos_label=None, sample_weight=None): …`
Docstring: Calculate true and false positives per binary classification threshold. Parameters: `y_true` : ndarray of shape (n_samples,) — True targets of binary classification. `y_score` : ndarray of shape (n_samples,) — Estimated probabilities or output of a decision function. `pos_label` :…

**`precision_recall_curve`** — scikit-learn/scikit-learn · [`sklearn/metrics/_ranking.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_ranking.py) · python · BSD-3-Clause
Signature: `def precision_recall_curve(y_true, y_score, *, pos_label=None, sample_weight=None, drop_intermediate=False): …`
Docstring: Compute precision-recall pairs for different probability thresholds. Note: this implementation is restricted to the binary classification task. The precision is the ratio ``tp / (tp + fp)`` where ``tp`` is the number of true positives and ``fp`` the number of false positives. The precision is intuitiv…

**`roc_curve`** — scikit-learn/scikit-learn · [`sklearn/metrics/_ranking.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_ranking.py) · python · BSD-3-Clause
Signature: `def roc_curve(y_true, y_score, *, pos_label=None, sample_weight=None, drop_intermediate=True): …`
Docstring: Compute Receiver operating characteristic (ROC). Note: this implementation is restricted to the binary classification task. Read more in the :ref:`User Guide <roc_metrics>`. Parameters: `y_true` : array-like of shape (n_samples,) — True binary labels. If labels are not either {-1, 1…

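The threshold sweep behind `_binary_clf_curve` and `roc_curve` can be sketched as: sort samples by score descending, accumulate true/false positives, and emit a (FPR, TPR) point after each distinct score. An illustration only (assumes {0, 1} labels with both classes present; no `drop_intermediate`, no `sample_weight`), not sklearn's implementation:

```python
def roc_curve_sketch(y_true, y_score):
    # Each distinct score acts as a threshold: everything scored at or above
    # it is predicted positive.
    pairs = sorted(zip(y_score, y_true), reverse=True)
    P = sum(y_true)
    N = len(y_true) - P
    fpr, tpr = [0.0], [0.0]
    tp = fp = 0
    for i, (s, t) in enumerate(pairs):
        if t == 1:
            tp += 1
        else:
            fp += 1
        # Emit a point only after the last sample of a run of tied scores.
        if i == len(pairs) - 1 or pairs[i + 1][0] != s:
            fpr.append(fp / N)
            tpr.append(tp / P)
    return fpr, tpr

print(roc_curve_sketch([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))
```

Feeding the resulting points to a trapezoidal-rule AUC gives the ROC AUC of the scores.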
**`label_ranking_average_precision_score`** — scikit-learn/scikit-learn · [`sklearn/metrics/_ranking.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_ranking.py) · python · BSD-3-Clause
Signature: `def label_ranking_average_precision_score(y_true, y_score, *, sample_weight=None): …`
Docstring: Compute ranking-based average precision. Label ranking average precision (LRAP) is the average over each ground truth label assigned to each sample, of the ratio of true vs. total labels with lower score. This metric is used in multilabel ranking problem, where the goal is to give better rank to t…

**`coverage_error`** — scikit-learn/scikit-learn · [`sklearn/metrics/_ranking.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_ranking.py) · python · BSD-3-Clause
Signature: `def coverage_error(y_true, y_score, *, sample_weight=None): …`
Docstring: Coverage error measure. Compute how far we need to go through the ranked scores to cover all true labels. The best value is equal to the average number of labels in ``y_true`` per sample. Ties in ``y_scores`` are broken by giving maximal rank that would have been assigned to all tied values. …

**`label_ranking_loss`** — scikit-learn/scikit-learn · [`sklearn/metrics/_ranking.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_ranking.py) · python · BSD-3-Clause
Signature: `def label_ranking_loss(y_true, y_score, *, sample_weight=None): …`
Docstring: Compute Ranking loss measure. Compute the average number of label pairs that are incorrectly ordered given y_score weighted by the size of the label set and the number of labels not in the label set. This is similar to the error set size, but weighted by the number of relevant and irrelevant label…

**`_dcg_sample_scores`** — scikit-learn/scikit-learn · [`sklearn/metrics/_ranking.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_ranking.py) · python · BSD-3-Clause
Signature: `def _dcg_sample_scores(y_true, y_score, k=None, log_base=2, ignore_ties=False): …`
Docstring: Compute Discounted Cumulative Gain. Sum the true scores ranked in the order induced by the predicted scores, after applying a logarithmic discount. This ranking metric yields a high value if true labels are ranked high by ``y_score``. Parameters: `y_true` : ndarray of shape (n_sam…

**`_tie_averaged_dcg`** — scikit-learn/scikit-learn · [`sklearn/metrics/_ranking.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_ranking.py) · python · BSD-3-Clause
Signature: `def _tie_averaged_dcg(y_true, y_score, discount_cumsum): …`
Docstring: Compute DCG by averaging over possible permutations of ties. The gain (`y_true`) of an index falling inside a tied group (in the order induced by `y_score`) is replaced by the average gain within this group. The discounted gain for a tied group is then the average `y_true` within this group times …

**`dcg_score`** — scikit-learn/scikit-learn · [`sklearn/metrics/_ranking.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_ranking.py) · python · BSD-3-Clause
Signature: `def dcg_score(y_true, y_score, *, k=None, log_base=2, sample_weight=None, ignore_ties=False): …`
Docstring: Compute Discounted Cumulative Gain. Sum the true scores ranked in the order induced by the predicted scores, after applying a logarithmic discount. This ranking metric yields a high value if true labels are ranked high by ``y_score``. Usually the Normalized Discounted Cumulative Gain (NDCG, compu…

**`_ndcg_sample_scores`** — scikit-learn/scikit-learn · [`sklearn/metrics/_ranking.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_ranking.py) · python · BSD-3-Clause
Signature: `def _ndcg_sample_scores(y_true, y_score, k=None, ignore_ties=False): …`
Docstring: Compute Normalized Discounted Cumulative Gain. Sum the true scores ranked in the order induced by the predicted scores, after applying a logarithmic discount. Then divide by the best possible score (Ideal DCG, obtained for a perfect ranking) to obtain a score between 0 and 1. This ranking metric y…

**`ndcg_score`** — scikit-learn/scikit-learn · [`sklearn/metrics/_ranking.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_ranking.py) · python · BSD-3-Clause
Signature: `def ndcg_score(y_true, y_score, *, k=None, sample_weight=None, ignore_ties=False): …`
Docstring: Compute Normalized Discounted Cumulative Gain. Sum the true scores ranked in the order induced by the predicted scores, after applying a logarithmic discount. Then divide by the best possible score (Ideal DCG, obtained for a perfect ranking) to obtain a score between 0 and 1. This ranking metric r…

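The DCG/NDCG recipe shared by the four functions above, sum true gains in predicted-score order under a `log2` discount, then normalize by the ideal ordering, can be sketched for a single ranking. An illustration that ignores ties and `sample_weight` (unlike `_tie_averaged_dcg`), not sklearn's implementation:

```python
import math

def dcg(relevance, ranking_scores, k=None):
    # Order items by predicted score (descending) and sum discounted gains:
    # gain at rank r (0-based) is relevance / log2(r + 2).
    order = sorted(range(len(ranking_scores)), key=lambda i: -ranking_scores[i])
    if k is not None:
        order = order[:k]
    return sum(relevance[i] / math.log2(pos + 2) for pos, i in enumerate(order))

def ndcg(relevance, ranking_scores, k=None):
    # Normalize by the ideal DCG: ranking the items by true relevance.
    ideal = dcg(relevance, relevance, k)
    return dcg(relevance, ranking_scores, k) / ideal if ideal else 0.0

print(ndcg([3, 2, 0], [3, 2, 0]))  # a perfect ranking scores 1.0
```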
**`top_k_accuracy_score`** — scikit-learn/scikit-learn · [`sklearn/metrics/_ranking.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_ranking.py) · python · BSD-3-Clause
Signature: `def top_k_accuracy_score(y_true, y_score, *, k=2, normalize=True, sample_weight=None, labels=None): …`
Docstring: Top-k Accuracy classification score. This metric computes the number of times where the correct label is among the top `k` labels predicted (ranked by predicted scores). Note that the multilabel case isn't covered here. Read more in the :ref:`User Guide <top_k_accuracy_score>` Parameters…

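"Correct label among the top `k` predicted scores" has a compact sketch; an illustration only (integer class labels indexing the score columns, arbitrary tie-breaking, no `sample_weight`), not sklearn's implementation:

```python
def top_k_accuracy(y_true, y_score, k=2):
    # A sample is a hit if its true class index is among the k columns
    # with the highest scores.
    hits = 0
    for t, scores in zip(y_true, y_score):
        top_k = sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
        hits += t in top_k
    return hits / len(y_true)

scores = [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]]
print(top_k_accuracy([1, 2], scores, k=2))  # both true classes rank 2nd
```

With `k=1` this degenerates to ordinary accuracy on the argmax prediction.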
**`_check_reg_targets`** — scikit-learn/scikit-learn · [`sklearn/metrics/_regression.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_regression.py) · python · BSD-3-Clause
Signature: `def _check_reg_targets(y_true, y_pred, sample_weight, multioutput, dtype="numeric", xp=None): …`
Docstring: Check that y_true, y_pred and sample_weight belong to the same regression task. To reduce redundancy when calling `_find_matching_floating_dtype`, please use `_check_reg_targets_with_floating_dtype` instead. Parameters: `y_true` : array-like of shape (n_samples,) or (n_samples, n_outputs) …

**`_check_reg_targets_with_floating_dtype`** — scikit-learn/scikit-learn · [`sklearn/metrics/_regression.py`](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_regression.py) · python · BSD-3-Clause
Signature: `def _check_reg_targets_with_floating_dtype(y_true, y_pred, sample_weight, multioutput, xp=None): …`
Docstring: Ensures y_true, y_pred, and sample_weight correspond to same regression task. Extends `_check_reg_targets` by automatically selecting a suitable floating-point data type for inputs using `_find_matching_floating_dtype`. Use this private method only when converting inputs to array API-compatibles. Par…

def mean_absolute_error(
y_true, y_pred, *, sample_weight=None, multioutput="uniform_average"
):
"""Mean absolute error regression loss.
The mean absolute error is a non-negative floating point value, where the best value
is 0.0. Read more in the :ref:`User Guide <mean_absolute_error>`.
Parameters
... | Mean absolute error regression loss.
The mean absolute error is a non-negative floating point value, where the best value
is 0.0. Read more in the :ref:`User Guide <mean_absolute_error>`.
Parameters
----------
y_true : array-like of shape (n_samples,) or (n_samples, n_outputs)
Ground truth (co... | mean_absolute_error | python | scikit-learn/scikit-learn | sklearn/metrics/_regression.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_regression.py | BSD-3-Clause |
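The formula behind `mean_absolute_error` is the average of the absolute residuals; a pure-Python sketch (the scikit-learn version additionally supports sample weights and multioutput targets):

```python
def mean_absolute_error(y_true, y_pred):
    # Average absolute residual; 0.0 means a perfect fit.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
```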
def mean_pinball_loss(
y_true, y_pred, *, sample_weight=None, alpha=0.5, multioutput="uniform_average"
):
"""Pinball loss for quantile regression.
Read more in the :ref:`User Guide <pinball_loss>`.
Parameters
----------
y_true : array-like of shape (n_samples,) or (n_samples, n_outputs)
... | Pinball loss for quantile regression.
Read more in the :ref:`User Guide <pinball_loss>`.
Parameters
----------
y_true : array-like of shape (n_samples,) or (n_samples, n_outputs)
Ground truth (correct) target values.
y_pred : array-like of shape (n_samples,) or (n_samples, n_outputs)
... | mean_pinball_loss | python | scikit-learn/scikit-learn | sklearn/metrics/_regression.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_regression.py | BSD-3-Clause |
def mean_absolute_percentage_error(
y_true, y_pred, *, sample_weight=None, multioutput="uniform_average"
):
"""Mean absolute percentage error (MAPE) regression loss.
Note that we are not using the common "percentage" definition: the percentage
in the range [0, 100] is converted to a relative value in t... | Mean absolute percentage error (MAPE) regression loss.
Note that we are not using the common "percentage" definition: the percentage
in the range [0, 100] is converted to a relative value in the range [0, 1]
by dividing by 100. Thus, an error of 200% corresponds to a relative error of 2.
Read more in ... | mean_absolute_percentage_error | python | scikit-learn/scikit-learn | sklearn/metrics/_regression.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_regression.py | BSD-3-Clause |
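As the record's note explains, MAPE is reported as a relative fraction rather than a percentage in [0, 100]. A pure-Python sketch of the core formula (the library version clips near-zero denominators and handles weights and multioutput):

```python
def mean_absolute_percentage_error(y_true, y_pred):
    # Relative error per sample; the result is a fraction (0.3 means 30%).
    # This sketch assumes y_true contains no zeros.
    return sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)
```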
def mean_squared_error(
y_true,
y_pred,
*,
sample_weight=None,
multioutput="uniform_average",
):
"""Mean squared error regression loss.
Read more in the :ref:`User Guide <mean_squared_error>`.
Parameters
----------
y_true : array-like of shape (n_samples,) or (n_samples, n_outp... | Mean squared error regression loss.
Read more in the :ref:`User Guide <mean_squared_error>`.
Parameters
----------
y_true : array-like of shape (n_samples,) or (n_samples, n_outputs)
Ground truth (correct) target values.
y_pred : array-like of shape (n_samples,) or (n_samples, n_outputs)
... | mean_squared_error | python | scikit-learn/scikit-learn | sklearn/metrics/_regression.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_regression.py | BSD-3-Clause |
def root_mean_squared_error(
y_true, y_pred, *, sample_weight=None, multioutput="uniform_average"
):
"""Root mean squared error regression loss.
Read more in the :ref:`User Guide <mean_squared_error>`.
.. versionadded:: 1.4
Parameters
----------
y_true : array-like of shape (n_samples,) o... | Root mean squared error regression loss.
Read more in the :ref:`User Guide <mean_squared_error>`.
.. versionadded:: 1.4
Parameters
----------
y_true : array-like of shape (n_samples,) or (n_samples, n_outputs)
Ground truth (correct) target values.
y_pred : array-like of shape (n_samp... | root_mean_squared_error | python | scikit-learn/scikit-learn | sklearn/metrics/_regression.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_regression.py | BSD-3-Clause |
def mean_squared_log_error(
y_true,
y_pred,
*,
sample_weight=None,
multioutput="uniform_average",
):
"""Mean squared logarithmic error regression loss.
Read more in the :ref:`User Guide <mean_squared_log_error>`.
Parameters
----------
y_true : array-like of shape (n_samples,) o... | Mean squared logarithmic error regression loss.
Read more in the :ref:`User Guide <mean_squared_log_error>`.
Parameters
----------
y_true : array-like of shape (n_samples,) or (n_samples, n_outputs)
Ground truth (correct) target values.
y_pred : array-like of shape (n_samples,) or (n_samp... | mean_squared_log_error | python | scikit-learn/scikit-learn | sklearn/metrics/_regression.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_regression.py | BSD-3-Clause |
def root_mean_squared_log_error(
y_true, y_pred, *, sample_weight=None, multioutput="uniform_average"
):
"""Root mean squared logarithmic error regression loss.
Read more in the :ref:`User Guide <mean_squared_log_error>`.
.. versionadded:: 1.4
Parameters
----------
y_true : array-like of ... | Root mean squared logarithmic error regression loss.
Read more in the :ref:`User Guide <mean_squared_log_error>`.
.. versionadded:: 1.4
Parameters
----------
y_true : array-like of shape (n_samples,) or (n_samples, n_outputs)
Ground truth (correct) target values.
y_pred : array-like ... | root_mean_squared_log_error | python | scikit-learn/scikit-learn | sklearn/metrics/_regression.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_regression.py | BSD-3-Clause |
def median_absolute_error(
y_true, y_pred, *, multioutput="uniform_average", sample_weight=None
):
"""Median absolute error regression loss.
Median absolute error output is non-negative floating point. The best value
is 0.0. Read more in the :ref:`User Guide <median_absolute_error>`.
Parameters
... | Median absolute error regression loss.
Median absolute error output is non-negative floating point. The best value
is 0.0. Read more in the :ref:`User Guide <median_absolute_error>`.
Parameters
----------
y_true : array-like of shape (n_samples,) or (n_samples, n_outputs)
Ground truth (cor... | median_absolute_error | python | scikit-learn/scikit-learn | sklearn/metrics/_regression.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_regression.py | BSD-3-Clause |
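The median of the absolute residuals is robust to outliers: a single wild prediction barely moves it, unlike the mean absolute error. A minimal sketch using the standard library:

```python
import statistics

def median_absolute_error(y_true, y_pred):
    # statistics.median sorts the residuals and takes the middle value
    # (or the mean of the two middle values for an even count).
    return statistics.median(abs(t - p) for t, p in zip(y_true, y_pred))
```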
def _assemble_r2_explained_variance(
numerator, denominator, n_outputs, multioutput, force_finite, xp, device
):
"""Common part used by explained variance score and :math:`R^2` score."""
dtype = numerator.dtype
nonzero_denominator = denominator != 0
if not force_finite:
# Standard formula,... | Common part used by explained variance score and :math:`R^2` score. | _assemble_r2_explained_variance | python | scikit-learn/scikit-learn | sklearn/metrics/_regression.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_regression.py | BSD-3-Clause |
def explained_variance_score(
y_true,
y_pred,
*,
sample_weight=None,
multioutput="uniform_average",
force_finite=True,
):
"""Explained variance regression score function.
Best possible score is 1.0, lower values are worse.
In the particular case when ``y_true`` is constant, the exp... | Explained variance regression score function.
Best possible score is 1.0, lower values are worse.
In the particular case when ``y_true`` is constant, the explained variance
score is not finite: it is either ``NaN`` (perfect predictions) or
``-Inf`` (imperfect predictions). To prevent such non-finite n... | explained_variance_score | python | scikit-learn/scikit-learn | sklearn/metrics/_regression.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_regression.py | BSD-3-Clause |
def r2_score(
y_true,
y_pred,
*,
sample_weight=None,
multioutput="uniform_average",
force_finite=True,
):
""":math:`R^2` (coefficient of determination) regression score function.
Best possible score is 1.0 and it can be negative (because the
model can be arbitrarily worse). In the g... | :math:`R^2` (coefficient of determination) regression score function.
Best possible score is 1.0 and it can be negative (because the
model can be arbitrarily worse). In the general case when the true y is
non-constant, a constant model that always predicts the average y
disregarding the input features ... | r2_score | python | scikit-learn/scikit-learn | sklearn/metrics/_regression.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_regression.py | BSD-3-Clause |
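As the docstring states, a constant model predicting the mean of `y_true` gets an :math:`R^2` of 0.0, and worse models go negative. A pure-Python sketch of the single-output formula (the real function also handles weights, multioutput, and the `force_finite` edge cases for constant targets):

```python
def r2_score(y_true, y_pred):
    # 1 minus the ratio of the residual sum of squares to the total
    # sum of squares around the mean of y_true.
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot
```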
def max_error(y_true, y_pred):
"""
The max_error metric calculates the maximum residual error.
Read more in the :ref:`User Guide <max_error>`.
Parameters
----------
y_true : array-like of shape (n_samples,)
Ground truth (correct) target values.
y_pred : array-like of shape (n_samp... |
The max_error metric calculates the maximum residual error.
Read more in the :ref:`User Guide <max_error>`.
Parameters
----------
y_true : array-like of shape (n_samples,)
Ground truth (correct) target values.
y_pred : array-like of shape (n_samples,)
Estimated target values.... | max_error | python | scikit-learn/scikit-learn | sklearn/metrics/_regression.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_regression.py | BSD-3-Clause |
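`max_error` captures the worst-case residual rather than any average, which makes it useful for bounding how wrong a model can be on a single sample. The formula in one line:

```python
def max_error(y_true, y_pred):
    # Worst-case absolute residual over all samples.
    return max(abs(t - p) for t, p in zip(y_true, y_pred))
```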
def mean_tweedie_deviance(y_true, y_pred, *, sample_weight=None, power=0):
"""Mean Tweedie deviance regression loss.
Read more in the :ref:`User Guide <mean_tweedie_deviance>`.
Parameters
----------
y_true : array-like of shape (n_samples,)
Ground truth (correct) target values.
y_pred... | Mean Tweedie deviance regression loss.
Read more in the :ref:`User Guide <mean_tweedie_deviance>`.
Parameters
----------
y_true : array-like of shape (n_samples,)
Ground truth (correct) target values.
y_pred : array-like of shape (n_samples,)
Estimated target values.
sample_w... | mean_tweedie_deviance | python | scikit-learn/scikit-learn | sklearn/metrics/_regression.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_regression.py | BSD-3-Clause |
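The Tweedie deviance unifies several familiar losses through its `power` parameter. A sketch covering the three classic members only (the scikit-learn function supports the full power range and validates the sign constraints on `y_true` and `y_pred`); this sketch assumes strictly positive targets for powers 1 and 2:

```python
import math

def tweedie_deviance(y, mu, power=0):
    # power=0: normal distribution, i.e. plain squared error;
    # power=1: Poisson deviance; power=2: gamma deviance.
    if power == 0:
        return (y - mu) ** 2
    if power == 1:
        return 2 * (y * math.log(y / mu) - y + mu)
    if power == 2:
        return 2 * (math.log(mu / y) + y / mu - 1)
    raise NotImplementedError("sketch covers power in {0, 1, 2} only")

def mean_tweedie_deviance(y_true, y_pred, power=0):
    devs = [tweedie_deviance(t, p, power) for t, p in zip(y_true, y_pred)]
    return sum(devs) / len(devs)
```

For `power=0` the metric coincides with the mean squared error, and every member is zero for a perfect prediction.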
def d2_tweedie_score(y_true, y_pred, *, sample_weight=None, power=0):
"""
:math:`D^2` regression score function, fraction of Tweedie deviance explained.
Best possible score is 1.0 and it can be negative (because the model can be
arbitrarily worse). A model that always uses the empirical mean of `y_true... |
:math:`D^2` regression score function, fraction of Tweedie deviance explained.
Best possible score is 1.0 and it can be negative (because the model can be
arbitrarily worse). A model that always uses the empirical mean of `y_true` as
constant prediction, disregarding the input features, gets a D^2 sco... | d2_tweedie_score | python | scikit-learn/scikit-learn | sklearn/metrics/_regression.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_regression.py | BSD-3-Clause |
def d2_absolute_error_score(
y_true, y_pred, *, sample_weight=None, multioutput="uniform_average"
):
"""
:math:`D^2` regression score function, fraction of absolute error explained.
Best possible score is 1.0 and it can be negative (because the model can be
arbitrarily worse). A model that always u... |
:math:`D^2` regression score function, fraction of absolute error explained.
Best possible score is 1.0 and it can be negative (because the model can be
arbitrarily worse). A model that always uses the empirical median of `y_true`
as constant prediction, disregarding the input features,
gets a :ma... | d2_absolute_error_score | python | scikit-learn/scikit-learn | sklearn/metrics/_regression.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_regression.py | BSD-3-Clause |
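The :math:`D^2` absolute-error score compares the model's total absolute error against that of the best constant model, which for absolute error is the median of `y_true`. A pure-Python sketch of the single-output case:

```python
import statistics

def d2_absolute_error_score(y_true, y_pred):
    # 1 minus the ratio of the model's absolute error to the error of
    # a constant median predictor; that baseline scores exactly 0.0.
    med = statistics.median(y_true)
    num = sum(abs(t - p) for t, p in zip(y_true, y_pred))
    den = sum(abs(t - med) for t in y_true)
    return 1 - num / den
```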
def _cached_call(cache, estimator, response_method, *args, **kwargs):
"""Call estimator with method and args and kwargs."""
if cache is not None and response_method in cache:
return cache[response_method]
result, _ = _get_response_values(
estimator, *args, response_method=response_method, *... | Call estimator with method and args and kwargs. | _cached_call | python | scikit-learn/scikit-learn | sklearn/metrics/_scorer.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_scorer.py | BSD-3-Clause |
def _use_cache(self, estimator):
"""Return True if using a cache is beneficial, thus when a response method will
be called several times.
"""
if len(self._scorers) == 1: # Only one scorer
return False
counter = Counter(
[
_check_response_m... | Return True if using a cache is beneficial, thus when a response method will
be called several times.
| _use_cache | python | scikit-learn/scikit-learn | sklearn/metrics/_scorer.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_scorer.py | BSD-3-Clause |
def get_metadata_routing(self):
"""Get metadata routing of this object.
Please check :ref:`User Guide <metadata_routing>` on how the routing
mechanism works.
.. versionadded:: 1.3
Returns
-------
routing : MetadataRouter
A :class:`~utils.metadata_ro... | Get metadata routing of this object.
Please check :ref:`User Guide <metadata_routing>` on how the routing
mechanism works.
.. versionadded:: 1.3
Returns
-------
routing : MetadataRouter
A :class:`~utils.metadata_routing.MetadataRouter` encapsulating
... | get_metadata_routing | python | scikit-learn/scikit-learn | sklearn/metrics/_scorer.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_scorer.py | BSD-3-Clause |
def __call__(self, estimator, X, y_true, sample_weight=None, **kwargs):
"""Evaluate predicted target values for X relative to y_true.
Parameters
----------
estimator : object
Trained estimator to use for scoring. Must have a predict_proba
method; the output of th... | Evaluate predicted target values for X relative to y_true.
Parameters
----------
estimator : object
Trained estimator to use for scoring. Must have a predict_proba
method; the output of that is used to compute the score.
X : {array-like, sparse matrix}
... | __call__ | python | scikit-learn/scikit-learn | sklearn/metrics/_scorer.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_scorer.py | BSD-3-Clause |
def _warn_overlap(self, message, kwargs):
"""Warn if there is any overlap between ``self._kwargs`` and ``kwargs``.
This method is intended to be used to check for overlap between
``self._kwargs`` and ``kwargs`` passed as metadata.
"""
_kwargs = set() if self._kwargs is None else... | Warn if there is any overlap between ``self._kwargs`` and ``kwargs``.
This method is intended to be used to check for overlap between
``self._kwargs`` and ``kwargs`` passed as metadata.
| _warn_overlap | python | scikit-learn/scikit-learn | sklearn/metrics/_scorer.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_scorer.py | BSD-3-Clause |
def set_score_request(self, **kwargs):
"""Set requested parameters by the scorer.
Please see :ref:`User Guide <metadata_routing>` on how the routing
mechanism works.
.. versionadded:: 1.3
Parameters
----------
kwargs : dict
Arguments should be of th... | Set requested parameters by the scorer.
Please see :ref:`User Guide <metadata_routing>` on how the routing
mechanism works.
.. versionadded:: 1.3
Parameters
----------
kwargs : dict
Arguments should be of the form ``param_name=alias``, and `alias`
... | set_score_request | python | scikit-learn/scikit-learn | sklearn/metrics/_scorer.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_scorer.py | BSD-3-Clause |
def _score(self, method_caller, estimator, X, y_true, **kwargs):
"""Evaluate the response method of `estimator` on `X` and `y_true`.
Parameters
----------
method_caller : callable
Returns predictions given an estimator, method name, and other
arguments, potential... | Evaluate the response method of `estimator` on `X` and `y_true`.
Parameters
----------
method_caller : callable
Returns predictions given an estimator, method name, and other
arguments, potentially caching results.
estimator : object
Trained estimato... | _score | python | scikit-learn/scikit-learn | sklearn/metrics/_scorer.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_scorer.py | BSD-3-Clause |
def get_scorer(scoring):
"""Get a scorer from string.
Read more in the :ref:`User Guide <scoring_parameter>`.
:func:`~sklearn.metrics.get_scorer_names` can be used to retrieve the names
of all available scorers.
Parameters
----------
scoring : str, callable or None
Scoring method a... | Get a scorer from string.
Read more in the :ref:`User Guide <scoring_parameter>`.
:func:`~sklearn.metrics.get_scorer_names` can be used to retrieve the names
of all available scorers.
Parameters
----------
scoring : str, callable or None
Scoring method as string. If callable it is retu... | get_scorer | python | scikit-learn/scikit-learn | sklearn/metrics/_scorer.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_scorer.py | BSD-3-Clause |
def set_score_request(self, **kwargs):
"""Set requested parameters by the scorer.
Please see :ref:`User Guide <metadata_routing>` on how the routing
mechanism works.
.. versionadded:: 1.5
Parameters
----------
kwargs : dict
Arguments should be of th... | Set requested parameters by the scorer.
Please see :ref:`User Guide <metadata_routing>` on how the routing
mechanism works.
.. versionadded:: 1.5
Parameters
----------
kwargs : dict
Arguments should be of the form ``param_name=alias``, and `alias`
... | set_score_request | python | scikit-learn/scikit-learn | sklearn/metrics/_scorer.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_scorer.py | BSD-3-Clause |
def _check_multimetric_scoring(estimator, scoring):
"""Check the scoring parameter in cases when multiple metrics are allowed.
In addition, multimetric scoring leverages a caching mechanism to not call the same
estimator response method multiple times. Hence, the scorer is modified to only use
a single... | Check the scoring parameter in cases when multiple metrics are allowed.
In addition, multimetric scoring leverages a caching mechanism to not call the same
estimator response method multiple times. Hence, the scorer is modified to only use
a single response method given a list of response methods and the e... | _check_multimetric_scoring | python | scikit-learn/scikit-learn | sklearn/metrics/_scorer.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_scorer.py | BSD-3-Clause |
def make_scorer(
score_func, *, response_method="default", greater_is_better=True, **kwargs
):
"""Make a scorer from a performance metric or loss function.
A scorer is a wrapper around an arbitrary metric or loss function that is called
with the signature `scorer(estimator, X, y_true, **kwargs)`.
... | Make a scorer from a performance metric or loss function.
A scorer is a wrapper around an arbitrary metric or loss function that is called
with the signature `scorer(estimator, X, y_true, **kwargs)`.
It is accepted in all scikit-learn estimators or functions allowing a `scoring`
parameter.
The pa... | make_scorer | python | scikit-learn/scikit-learn | sklearn/metrics/_scorer.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_scorer.py | BSD-3-Clause |
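To illustrate the scorer convention that `make_scorer` implements — wrap a metric into a callable with the `(estimator, X, y_true)` signature, negating losses so that greater is always better — here is a heavily simplified sketch; `ConstantModel` is a hypothetical stand-in estimator, and the real `make_scorer` additionally routes response methods, metadata, and `**kwargs` validation:

```python
def make_scorer(score_func, greater_is_better=True, **kwargs):
    # Losses are negated so every scorer can be maximized uniformly.
    sign = 1 if greater_is_better else -1

    def scorer(estimator, X, y_true):
        y_pred = estimator.predict(X)
        return sign * score_func(y_true, y_pred, **kwargs)

    return scorer

class ConstantModel:
    # Minimal stand-in estimator, used only to exercise the scorer.
    def __init__(self, value):
        self.value = value

    def predict(self, X):
        return [self.value] * len(X)

def mean_absolute_error(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

neg_mae = make_scorer(mean_absolute_error, greater_is_better=False)
```

Negating the loss is the reason scikit-learn's built-in scorer is named `neg_mean_absolute_error`.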
def check_scoring(estimator=None, scoring=None, *, allow_none=False, raise_exc=True):
"""Determine scorer from user options.
A TypeError will be thrown if the estimator cannot be scored.
Parameters
----------
estimator : estimator object implementing 'fit' or None, default=None
The object ... | Determine scorer from user options.
A TypeError will be thrown if the estimator cannot be scored.
Parameters
----------
estimator : estimator object implementing 'fit' or None, default=None
The object to use to fit the data. If `None`, then this function may error
depending on `allow_n... | check_scoring | python | scikit-learn/scikit-learn | sklearn/metrics/_scorer.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_scorer.py | BSD-3-Clause |
def _threshold_scores_to_class_labels(y_score, threshold, classes, pos_label):
"""Threshold `y_score` and return the associated class labels."""
if pos_label is None:
map_thresholded_score_to_label = np.array([0, 1])
else:
pos_label_idx = np.flatnonzero(classes == pos_label)[0]
neg_l... | Threshold `y_score` and return the associated class labels. | _threshold_scores_to_class_labels | python | scikit-learn/scikit-learn | sklearn/metrics/_scorer.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_scorer.py | BSD-3-Clause |
def from_scorer(cls, scorer, response_method, thresholds):
"""Create a continuous scorer from a normal scorer."""
instance = cls(
score_func=scorer._score_func,
sign=scorer._sign,
response_method=response_method,
thresholds=thresholds,
kwargs=s... | Create a continuous scorer from a normal scorer. | from_scorer | python | scikit-learn/scikit-learn | sklearn/metrics/_scorer.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_scorer.py | BSD-3-Clause |
def _score(self, method_caller, estimator, X, y_true, **kwargs):
"""Evaluate predicted target values for X relative to y_true.
Parameters
----------
method_caller : callable
Returns predictions given an estimator, method name, and other
arguments, potentially cac... | Evaluate predicted target values for X relative to y_true.
Parameters
----------
method_caller : callable
Returns predictions given an estimator, method name, and other
arguments, potentially caching results.
estimator : object
Trained estimator to u... | _score | python | scikit-learn/scikit-learn | sklearn/metrics/_scorer.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_scorer.py | BSD-3-Clause |
def _check_rows_and_columns(a, b):
"""Unpacks the row and column arrays and checks their shape."""
check_consistent_length(*a)
check_consistent_length(*b)
checks = lambda x: check_array(x, ensure_2d=False)
a_rows, a_cols = map(checks, a)
b_rows, b_cols = map(checks, b)
return a_rows, a_cols,... | Unpacks the row and column arrays and checks their shape. | _check_rows_and_columns | python | scikit-learn/scikit-learn | sklearn/metrics/cluster/_bicluster.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/cluster/_bicluster.py | BSD-3-Clause |
def _jaccard(a_rows, a_cols, b_rows, b_cols):
"""Jaccard coefficient on the elements of the two biclusters."""
intersection = (a_rows * b_rows).sum() * (a_cols * b_cols).sum()
a_size = a_rows.sum() * a_cols.sum()
b_size = b_rows.sum() * b_cols.sum()
return intersection / (a_size + b_size - interse... | Jaccard coefficient on the elements of the two biclusters. | _jaccard | python | scikit-learn/scikit-learn | sklearn/metrics/cluster/_bicluster.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/cluster/_bicluster.py | BSD-3-Clause |
def _pairwise_similarity(a, b, similarity):
"""Computes pairwise similarity matrix.
result[i, j] is the Jaccard coefficient of a's bicluster i and b's
bicluster j.
"""
a_rows, a_cols, b_rows, b_cols = _check_rows_and_columns(a, b)
n_a = a_rows.shape[0]
n_b = b_rows.shape[0]
result = np... | Computes pairwise similarity matrix.
result[i, j] is the Jaccard coefficient of a's bicluster i and b's
bicluster j.
| _pairwise_similarity | python | scikit-learn/scikit-learn | sklearn/metrics/cluster/_bicluster.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/cluster/_bicluster.py | BSD-3-Clause |
def consensus_score(a, b, *, similarity="jaccard"):
"""The similarity of two sets of biclusters.
Similarity between individual biclusters is computed. Then the best
matching between sets is found by solving a linear sum assignment problem,
using a modified Jonker-Volgenant algorithm.
The final scor... | The similarity of two sets of biclusters.
Similarity between individual biclusters is computed. Then the best
matching between sets is found by solving a linear sum assignment problem,
using a modified Jonker-Volgenant algorithm.
The final score is the sum of similarities divided by the size of
the... | consensus_score | python | scikit-learn/scikit-learn | sklearn/metrics/cluster/_bicluster.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/cluster/_bicluster.py | BSD-3-Clause |
def check_clusterings(labels_true, labels_pred):
"""Check that the labels arrays are 1D and of same dimension.
Parameters
----------
labels_true : array-like of shape (n_samples,)
The true labels.
labels_pred : array-like of shape (n_samples,)
The predicted labels.
"""
labe... | Check that the labels arrays are 1D and of same dimension.
Parameters
----------
labels_true : array-like of shape (n_samples,)
The true labels.
labels_pred : array-like of shape (n_samples,)
The predicted labels.
| check_clusterings | python | scikit-learn/scikit-learn | sklearn/metrics/cluster/_supervised.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/cluster/_supervised.py | BSD-3-Clause |
def _generalized_average(U, V, average_method):
"""Return a particular mean of two numbers."""
if average_method == "min":
return min(U, V)
elif average_method == "geometric":
return np.sqrt(U * V)
elif average_method == "arithmetic":
return np.mean([U, V])
elif average_metho... | Return a particular mean of two numbers. | _generalized_average | python | scikit-learn/scikit-learn | sklearn/metrics/cluster/_supervised.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/cluster/_supervised.py | BSD-3-Clause |
def contingency_matrix(
labels_true, labels_pred, *, eps=None, sparse=False, dtype=np.int64
):
"""Build a contingency matrix describing the relationship between labels.
Read more in the :ref:`User Guide <contingency_matrix>`.
Parameters
----------
labels_true : array-like of shape (n_samples,)... | Build a contingency matrix describing the relationship between labels.
Read more in the :ref:`User Guide <contingency_matrix>`.
Parameters
----------
labels_true : array-like of shape (n_samples,)
Ground truth class labels to be used as a reference.
labels_pred : array-like of shape (n_sa... | contingency_matrix | python | scikit-learn/scikit-learn | sklearn/metrics/cluster/_supervised.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/cluster/_supervised.py | BSD-3-Clause |
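A pure-Python sketch of what a contingency matrix contains (the library builds it as a sparse or dense array with an optional `eps` smoothing term): cell `(i, j)` counts the samples whose true class is `i` and predicted cluster is `j`.

```python
def contingency_matrix(labels_true, labels_pred):
    # Rows follow the sorted true classes, columns the sorted
    # predicted clusters.
    classes = sorted(set(labels_true))
    clusters = sorted(set(labels_pred))
    table = [[0] * len(clusters) for _ in classes]
    for t, p in zip(labels_true, labels_pred):
        table[classes.index(t)][clusters.index(p)] += 1
    return table
```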
def homogeneity_completeness_v_measure(labels_true, labels_pred, *, beta=1.0):
"""Compute the homogeneity and completeness and V-Measure scores at once.
Those metrics are based on normalized conditional entropy measures of
the clustering labeling to evaluate given the knowledge of a Ground
Truth class ... | Compute the homogeneity, completeness, and V-Measure scores at once.
Those metrics are based on normalized conditional entropy measures of
the clustering labeling to evaluate given the knowledge of a Ground
Truth class labels of the same samples.
A clustering result satisfies homogeneity if all of i... | homogeneity_completeness_v_measure | python | scikit-learn/scikit-learn | sklearn/metrics/cluster/_supervised.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/cluster/_supervised.py | BSD-3-Clause |
def mutual_info_score(labels_true, labels_pred, *, contingency=None):
"""Mutual Information between two clusterings.
The Mutual Information is a measure of the similarity between two labels
of the same data. Where :math:`|U_i|` is the number of the samples
in cluster :math:`U_i` and :math:`|V_j|` is th... | Mutual Information between two clusterings.
The Mutual Information is a measure of the similarity between two labels
of the same data. Where :math:`|U_i|` is the number of the samples
in cluster :math:`U_i` and :math:`|V_j|` is the number of the
samples in cluster :math:`V_j`, the Mutual Information
... | mutual_info_score | python | scikit-learn/scikit-learn | sklearn/metrics/cluster/_supervised.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/cluster/_supervised.py | BSD-3-Clause |
def normalized_mutual_info_score(
labels_true, labels_pred, *, average_method="arithmetic"
):
"""Normalized Mutual Information between two clusterings.
Normalized Mutual Information (NMI) is a normalization of the Mutual
Information (MI) score to scale the results between 0 (no mutual
information) ... | Normalized Mutual Information between two clusterings.
Normalized Mutual Information (NMI) is a normalization of the Mutual
Information (MI) score to scale the results between 0 (no mutual
information) and 1 (perfect correlation). In this function, mutual
information is normalized by some generalized m... | normalized_mutual_info_score | python | scikit-learn/scikit-learn | sklearn/metrics/cluster/_supervised.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/cluster/_supervised.py | BSD-3-Clause |
def fowlkes_mallows_score(labels_true, labels_pred, *, sparse="deprecated"):
"""Measure the similarity of two clusterings of a set of points.
.. versionadded:: 0.18
The Fowlkes-Mallows index (FMI) is defined as the geometric mean of
the precision and recall::
FMI = TP / sqrt((TP + FP) * (TP +... | Measure the similarity of two clusterings of a set of points.
.. versionadded:: 0.18
The Fowlkes-Mallows index (FMI) is defined as the geometric mean of
the precision and recall::
FMI = TP / sqrt((TP + FP) * (TP + FN))
Where ``TP`` is the number of **True Positive** (i.e. the number of pairs... | fowlkes_mallows_score | python | scikit-learn/scikit-learn | sklearn/metrics/cluster/_supervised.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/cluster/_supervised.py | BSD-3-Clause |
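The pair-counting definition quoted above can be sketched directly, at O(n²) cost (the library computes the same quantities from the contingency matrix instead): TP counts pairs that share both a true class and a predicted cluster, and FMI is the geometric mean of pairwise precision and recall.

```python
from itertools import combinations

def fowlkes_mallows(labels_true, labels_pred):
    # same_pred = TP + FP (pairs co-clustered in the prediction);
    # same_true = TP + FN (pairs sharing a true class).
    pairs = list(combinations(range(len(labels_true)), 2))
    tp = sum(
        labels_true[i] == labels_true[j] and labels_pred[i] == labels_pred[j]
        for i, j in pairs
    )
    same_true = sum(labels_true[i] == labels_true[j] for i, j in pairs)
    same_pred = sum(labels_pred[i] == labels_pred[j] for i, j in pairs)
    return tp / (same_true * same_pred) ** 0.5
```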
def entropy(labels):
"""Calculate the entropy for a labeling.
Parameters
----------
labels : array-like of shape (n_samples,), dtype=int
The labels.
Returns
-------
entropy : float
The entropy for a labeling.
Notes
-----
The logarithm used is the natural logarit... | Calculate the entropy for a labeling.
Parameters
----------
labels : array-like of shape (n_samples,), dtype=int
The labels.
Returns
-------
entropy : float
The entropy for a labeling.
Notes
-----
The logarithm used is the natural logarithm (base-e).
| entropy | python | scikit-learn/scikit-learn | sklearn/metrics/cluster/_supervised.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/cluster/_supervised.py | BSD-3-Clause |
def check_number_of_labels(n_labels, n_samples):
"""Check that number of labels are valid.
Parameters
----------
n_labels : int
Number of labels.
n_samples : int
Number of samples.
"""
if not 1 < n_labels < n_samples:
raise ValueError(
"Number of labels ... | Check that number of labels are valid.
Parameters
----------
n_labels : int
Number of labels.
n_samples : int
Number of samples.
| check_number_of_labels | python | scikit-learn/scikit-learn | sklearn/metrics/cluster/_unsupervised.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/cluster/_unsupervised.py | BSD-3-Clause |
def silhouette_score(
X, labels, *, metric="euclidean", sample_size=None, random_state=None, **kwds
):
"""Compute the mean Silhouette Coefficient of all samples.
The Silhouette Coefficient is calculated using the mean intra-cluster
distance (``a``) and the mean nearest-cluster distance (``b``) for each... | Compute the mean Silhouette Coefficient of all samples.
The Silhouette Coefficient is calculated using the mean intra-cluster
distance (``a``) and the mean nearest-cluster distance (``b``) for each
sample. The Silhouette Coefficient for a sample is ``(b - a) / max(a,
b)``. To clarify, ``b`` is the di... | silhouette_score | python | scikit-learn/scikit-learn | sklearn/metrics/cluster/_unsupervised.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/cluster/_unsupervised.py | BSD-3-Clause |
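A minimal sketch of the `(b - a) / max(a, b)` definition above, using toy data chosen for illustration:

```python
import numpy as np

from sklearn.metrics import silhouette_score

# Two tight, well-separated 1-D clusters: the nearest-cluster distance b
# dwarfs the intra-cluster distance a for every sample, so the mean
# coefficient sits close to the upper bound of 1.
X = np.array([[0.0], [0.1], [5.0], [5.1]])
score = silhouette_score(X, [0, 0, 1, 1])
```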
def _silhouette_reduce(D_chunk, start, labels, label_freqs):
"""Accumulate silhouette statistics for vertical chunk of X.
Parameters
----------
D_chunk : {array-like, sparse matrix} of shape (n_chunk_samples, n_samples)
Precomputed distances for a chunk. If a sparse matrix is provided,
... | Accumulate silhouette statistics for vertical chunk of X.
Parameters
----------
D_chunk : {array-like, sparse matrix} of shape (n_chunk_samples, n_samples)
Precomputed distances for a chunk. If a sparse matrix is provided,
only CSR format is accepted.
start : int
First index in ... | _silhouette_reduce | python | scikit-learn/scikit-learn | sklearn/metrics/cluster/_unsupervised.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/cluster/_unsupervised.py | BSD-3-Clause |
def silhouette_samples(X, labels, *, metric="euclidean", **kwds):
"""Compute the Silhouette Coefficient for each sample.
The Silhouette Coefficient is a measure of how well samples are clustered
with samples that are similar to themselves. Clustering models with a high
Silhouette Coefficient are said t... | Compute the Silhouette Coefficient for each sample.
The Silhouette Coefficient is a measure of how well samples are clustered
with samples that are similar to themselves. Clustering models with a high
Silhouette Coefficient are said to be dense, where samples in the same
cluster are similar to each oth... | silhouette_samples | python | scikit-learn/scikit-learn | sklearn/metrics/cluster/_unsupervised.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/cluster/_unsupervised.py | BSD-3-Clause |
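As a sketch of how the per-sample values relate to the aggregate metric: when no subsampling is used, `silhouette_score` is the mean of `silhouette_samples` (toy data is illustrative):

```python
import numpy as np

from sklearn.metrics import silhouette_samples, silhouette_score

X = np.array([[0.0], [0.2], [5.0], [5.2]])
labels = np.array([0, 0, 1, 1])

# One coefficient per sample, shape (n_samples,).
per_sample = silhouette_samples(X, labels)

# The aggregate score is the mean of the per-sample coefficients.
mean_matches = np.isclose(per_sample.mean(), silhouette_score(X, labels))
```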
def calinski_harabasz_score(X, labels):
"""Compute the Calinski and Harabasz score.
It is also known as the Variance Ratio Criterion.
The score is defined as the ratio of the between-cluster dispersion to
the within-cluster dispersion.
Read more in the :ref:`User Guide <calinski_harabasz_index... | Compute the Calinski and Harabasz score.
It is also known as the Variance Ratio Criterion.
The score is defined as the ratio of the between-cluster dispersion to
the within-cluster dispersion.
Read more in the :ref:`User Guide <calinski_harabasz_index>`.
Parameters
----------
X : arra... | calinski_harabasz_score | python | scikit-learn/scikit-learn | sklearn/metrics/cluster/_unsupervised.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/cluster/_unsupervised.py | BSD-3-Clause |
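A usage sketch of the dispersion-ratio definition above (toy data chosen so the ratio is visibly large):

```python
from sklearn.metrics import calinski_harabasz_score

# Compact clusters placed far apart: between-cluster dispersion dominates
# within-cluster dispersion, so the ratio is large (higher is better).
X = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
ch = calinski_harabasz_score(X, [0, 0, 1, 1])
```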
def davies_bouldin_score(X, labels):
"""Compute the Davies-Bouldin score.
The score is defined as the average similarity measure of each cluster with
its most similar cluster, where similarity is the ratio of within-cluster
distances to between-cluster distances. Thus, clusters which are farther
ap... | Compute the Davies-Bouldin score.
The score is defined as the average similarity measure of each cluster with
its most similar cluster, where similarity is the ratio of within-cluster
distances to between-cluster distances. Thus, clusters which are farther
apart and less dispersed will result in a bett... | davies_bouldin_score | python | scikit-learn/scikit-learn | sklearn/metrics/cluster/_unsupervised.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/cluster/_unsupervised.py | BSD-3-Clause |
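The same toy data illustrates the Davies-Bouldin definition above; because within-cluster distances are tiny relative to between-cluster distances, the score lands near its minimum of zero (lower is better):

```python
from sklearn.metrics import davies_bouldin_score

# Two compact, well-separated clusters -> average cluster similarity,
# and hence the score, is close to the 0 lower bound.
X = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
db = davies_bouldin_score(X, [0, 0, 1, 1])
```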
def test_consensus_score_issue2445():
"""Different number of biclusters in A and B"""
a_rows = np.array(
[
[True, True, False, False],
[False, False, True, True],
[False, False, False, True],
]
)
a_cols = np.array(
[
[True, True, Fa... | Different number of biclusters in A and B | test_consensus_score_issue2445 | python | scikit-learn/scikit-learn | sklearn/metrics/cluster/tests/test_bicluster.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/cluster/tests/test_bicluster.py | BSD-3-Clause |
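The test above exercises `sklearn.metrics.consensus_score`, whose inputs are `(rows, columns)` pairs of boolean indicator arrays with one row per bicluster. A minimal sketch with a single hypothetical bicluster:

```python
import numpy as np

from sklearn.metrics import consensus_score

# One bicluster covering the first of two rows and the first two of
# three columns; shapes are (n_biclusters, n_rows) and
# (n_biclusters, n_columns).
a = (np.array([[True, False]]), np.array([[True, True, False]]))

# Comparing a biclustering with itself gives perfect Jaccard consensus.
score = consensus_score(a, a)
```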
def test_returned_value_consistency(name):
"""Ensure that the returned values of all metrics are consistent.
It can only be a float. It should not be a numpy float64 or float32.
"""
rng = np.random.RandomState(0)
X = rng.randint(10, size=(20, 10))
labels_true = rng.randint(0, 3, size=(20,))
... | Ensure that the returned values of all metrics are consistent.
It can only be a float. It should not be a numpy float64 or float32.
| test_returned_value_consistency | python | scikit-learn/scikit-learn | sklearn/metrics/cluster/tests/test_common.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/cluster/tests/test_common.py | BSD-3-Clause |
def test_adjusted_rand_score_overflow():
"""Check that large amount of data will not lead to overflow in
`adjusted_rand_score`.
Non-regression test for:
https://github.com/scikit-learn/scikit-learn/issues/20305
"""
rng = np.random.RandomState(0)
y_true = rng.randint(0, 2, 100_000, dtype=np.i... | Check that a large amount of data will not lead to overflow in
`adjusted_rand_score`.
Non-regression test for:
https://github.com/scikit-learn/scikit-learn/issues/20305
| test_adjusted_rand_score_overflow | python | scikit-learn/scikit-learn | sklearn/metrics/cluster/tests/test_supervised.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/cluster/tests/test_supervised.py | BSD-3-Clause |
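Aside from the overflow regression above, basic `adjusted_rand_score` behavior can be sketched as:

```python
from sklearn.metrics import adjusted_rand_score

# ARI is invariant to label permutation: a relabeled perfect clustering
# still scores the maximum of 1.0.
ari = adjusted_rand_score([0, 0, 1, 1], [1, 1, 0, 0])
```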
def test_normalized_mutual_info_score_bounded(average_method):
"""Check that nmi returns a score between 0 (included) and 1 (excluded
for non-perfect match)
Non-regression test for issue #13836
"""
labels1 = [0] * 469
labels2 = [1] + labels1[1:]
labels3 = [0, 1] + labels1[2:]
# labels1... | Check that nmi returns a score between 0 (inclusive) and 1 (exclusive
for a non-perfect match)
Non-regression test for issue #13836
| test_normalized_mutual_info_score_bounded | python | scikit-learn/scikit-learn | sklearn/metrics/cluster/tests/test_supervised.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/cluster/tests/test_supervised.py | BSD-3-Clause |
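The boundedness property checked above can be sketched directly; the labelings below are illustrative:

```python
from sklearn.metrics import normalized_mutual_info_score

# A perfect match (up to relabeling) hits the upper bound of 1.0.
perfect = normalized_mutual_info_score([0, 0, 1, 1], [1, 1, 0, 0])

# A labeling that merely refines the true clusters keeps full mutual
# information but pays an entropy penalty, staying strictly below 1.
refined = normalized_mutual_info_score([0, 0, 1, 1], [0, 0, 1, 2])
```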
def test_fowlkes_mallows_sparse_deprecated(sparse):
"""Check deprecation warning for 'sparse' parameter of fowlkes_mallows_score."""
with pytest.warns(
FutureWarning, match="The 'sparse' parameter was deprecated in 1.7"
):
fowlkes_mallows_score([0, 1], [1, 1], sparse=sparse) | Check deprecation warning for 'sparse' parameter of fowlkes_mallows_score. | test_fowlkes_mallows_sparse_deprecated | python | scikit-learn/scikit-learn | sklearn/metrics/cluster/tests/test_supervised.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/cluster/tests/test_supervised.py | BSD-3-Clause |
def test_silhouette_samples_precomputed_sparse(sparse_container):
"""Check that silhouette_samples works for sparse matrices correctly."""
X = np.array([[0.2, 0.1, 0.1, 0.2, 0.1, 1.6, 0.2, 0.1]], dtype=np.float32).T
y = [0, 0, 0, 0, 1, 1, 1, 1]
pdist_dense = pairwise_distances(X)
pdist_sparse = spar... | Check that silhouette_samples works for sparse matrices correctly. | test_silhouette_samples_precomputed_sparse | python | scikit-learn/scikit-learn | sklearn/metrics/cluster/tests/test_unsupervised.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/cluster/tests/test_unsupervised.py | BSD-3-Clause |
def test_silhouette_reduce(sparse_container):
"""Check for non-CSR input to private method `_silhouette_reduce`."""
X = np.array([[0.2, 0.1, 0.1, 0.2, 0.1, 1.6, 0.2, 0.1]], dtype=np.float32).T
pdist_dense = pairwise_distances(X)
pdist_sparse = sparse_container(pdist_dense)
y = [0, 0, 0, 0, 1, 1, 1, ... | Check for non-CSR input to private method `_silhouette_reduce`. | test_silhouette_reduce | python | scikit-learn/scikit-learn | sklearn/metrics/cluster/tests/test_unsupervised.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/cluster/tests/test_unsupervised.py | BSD-3-Clause |
def assert_raises_on_only_one_label(func):
"""Assert message when there is only one label"""
rng = np.random.RandomState(seed=0)
with pytest.raises(ValueError, match="Number of labels is"):
func(rng.rand(10, 2), np.zeros(10)) | Assert message when there is only one label | assert_raises_on_only_one_label | python | scikit-learn/scikit-learn | sklearn/metrics/cluster/tests/test_unsupervised.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/cluster/tests/test_unsupervised.py | BSD-3-Clause |
def assert_raises_on_all_points_same_cluster(func):
"""Assert message when all point are in different clusters"""
rng = np.random.RandomState(seed=0)
with pytest.raises(ValueError, match="Number of labels is"):
func(rng.rand(10, 2), np.arange(10)) | Assert message when all points are in different clusters | assert_raises_on_all_points_same_cluster | python | scikit-learn/scikit-learn | sklearn/metrics/cluster/tests/test_unsupervised.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/cluster/tests/test_unsupervised.py | BSD-3-Clause
def test_silhouette_score_integer_precomputed():
"""Check that silhouette_score works for precomputed metrics that are integers.
Non-regression test for #22107.
"""
result = silhouette_score(
[[0, 1, 2], [1, 0, 1], [2, 1, 0]], [0, 0, 1], metric="precomputed"
)
assert result == pytest.ap... | Check that silhouette_score works for precomputed metrics that are integers.
Non-regression test for #22107.
| test_silhouette_score_integer_precomputed | python | scikit-learn/scikit-learn | sklearn/metrics/cluster/tests/test_unsupervised.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/cluster/tests/test_unsupervised.py | BSD-3-Clause |
def make_prediction(dataset=None, binary=False):
"""Make some classification predictions on a toy dataset using a SVC
If binary is True restrict to a binary classification problem instead of a
multiclass classification problem
"""
if dataset is None:
# import some data to play with
... | Make some classification predictions on a toy dataset using an SVC
If binary is True, restrict to a binary classification problem instead of a
multiclass classification problem
| make_prediction | python | scikit-learn/scikit-learn | sklearn/metrics/tests/test_classification.py | https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/tests/test_classification.py | BSD-3-Clause |