---
title: classification_report
tags:
- evaluate
- metric
description: >-
  Build a text report showing the main classification metrics: accuracy, precision, recall, and F1.
sdk: gradio
sdk_version: 3.14.0
app_file: app.py
pinned: false
license: apache-2.0
---
# Metric Card for classification_report

## Metric Description

Build a text report showing the main classification metrics: accuracy, precision, recall, and F1. This metric is based on scikit-learn's `classification_report` (see the citation below).

## How to Use

At minimum, this metric requires predictions and references as inputs.
```python
>>> import evaluate
>>> classification_report_metric = evaluate.load("bstrai/classification_report")
>>> results = classification_report_metric.compute(references=[0, 1], predictions=[0, 1])
>>> print(results)
{'0': {'precision': 1.0, 'recall': 1.0, 'f1-score': 1.0, 'support': 1}, '1': {'precision': 1.0, 'recall': 1.0, 'f1-score': 1.0, 'support': 1}, 'accuracy': 1.0, 'macro avg': {'precision': 1.0, 'recall': 1.0, 'f1-score': 1.0, 'support': 2}, 'weighted avg': {'precision': 1.0, 'recall': 1.0, 'f1-score': 1.0, 'support': 2}}
```
### Inputs

- **predictions** (`list` of `int`): Predicted labels.
- **references** (`list` of `int`): Ground truth labels.
- **labels** (`list` of `int`): Optional list of label indices to include in the report. Defaults to `None`.
- **target_names** (`list` of `str`): Optional display names matching the labels (same order). Defaults to `None`.
- **sample_weight** (`list` of `float`): Sample weights. Defaults to `None`.
- **digits** (`int`): Number of digits for formatting output floating point values. When `output_dict` is `True`, this is ignored and the returned values are not rounded. Defaults to 2.
- **zero_division** (`"warn"`, `0`, or `1`): Sets the value to return when there is a zero division. If set to `"warn"`, this acts as 0, but warnings are also raised. Defaults to `"warn"`.
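The optional arguments above are passed alongside `predictions` and `references`. As a minimal sketch, the call below uses `target_names` and `zero_division`; the label values and class names are made up for illustration, and the final line assumes the target names become the report keys, as in scikit-learn:

```python
>>> # Illustrative values only; reuses classification_report_metric from above.
>>> results = classification_report_metric.compute(
...     references=[0, 1, 2, 0, 1, 2],
...     predictions=[0, 1, 1, 2, 1, 0],
...     target_names=["cat", "dog", "bird"],  # display names for labels 0, 1, 2
...     zero_division=0,  # report 0.0 instead of warning on zero division
... )
>>> sorted(results)  # assuming target names key the report, as in scikit-learn
['accuracy', 'bird', 'cat', 'dog', 'macro avg', 'weighted avg']
```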
| ### Output Values | |
| - report (`str` or `dict`): Text summary of the precision, recall, F1 score for each class. Dictionary returned if output_dict is True. Dictionary has the following structure: | |
| ``` | |
| {'label 1': {'precision':0.5, | |
| 'recall':1.0, | |
| 'f1-score':0.67, | |
| 'support':1}, | |
| 'label 2': { ... }, | |
| ... | |
| } | |
| ``` | |
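Individual scores can be read straight from the nested dictionary. For instance, using the minimal example from "How to Use" above:

```python
>>> report = classification_report_metric.compute(references=[0, 1], predictions=[0, 1])
>>> report["macro avg"]["f1-score"]  # drill into a per-average entry
1.0
>>> report["0"]["support"]  # number of true instances of class 0
1
```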
The reported averages include the macro average (the unweighted mean of the per-label metrics), the weighted average (the support-weighted mean of the per-label metrics), and the sample average (multilabel classification only). The micro average (computed from the total true positives, false negatives, and false positives) is only shown for multi-label input or multi-class input with a subset of classes, because otherwise it corresponds to accuracy and would be the same for all metrics. See also `precision_recall_fscore_support` for more details on averages.
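To make the macro/weighted distinction concrete, here is a small sketch recomputing both from the per-class precisions of the multi-class example in the Examples section below; since the supports are equal there, the two averages coincide:

```python
>>> # Per-class precisions and supports taken from the example below.
>>> precisions = [0.5, 0.6666666666666666, 0.0]
>>> supports = [2, 2, 2]
>>> macro = sum(precisions) / len(precisions)  # unweighted mean
>>> weighted = sum(p * s for p, s in zip(precisions, supports)) / sum(supports)
>>> round(macro, 4), round(weighted, 4)  # equal here because supports are equal
(0.3889, 0.3889)
```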
Note that in binary classification, recall of the positive class is also known as "sensitivity"; recall of the negative class is "specificity".
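A short illustration with made-up binary data; the recall of class `1` is the sensitivity and the recall of class `0` is the specificity:

```python
>>> # Made-up binary labels for illustration.
>>> results = classification_report_metric.compute(
...     references=[0, 0, 1, 1], predictions=[0, 1, 1, 1]
... )
>>> results["1"]["recall"]  # sensitivity: both positives recovered
1.0
>>> results["0"]["recall"]  # specificity: one of two negatives recovered
0.5
```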
Output Example(s):

```python
{'0': {'precision': 1.0, 'recall': 1.0, 'f1-score': 1.0, 'support': 1}, '1': {'precision': 1.0, 'recall': 1.0, 'f1-score': 1.0, 'support': 1}, 'accuracy': 1.0, 'macro avg': {'precision': 1.0, 'recall': 1.0, 'f1-score': 1.0, 'support': 2}, 'weighted avg': {'precision': 1.0, 'recall': 1.0, 'f1-score': 1.0, 'support': 2}}
```
### Examples

Simple Example:

```python
>>> classification_report_metric = evaluate.load("bstrai/classification_report")
>>> results = classification_report_metric.compute(references=[0, 1, 2, 0, 1, 2], predictions=[0, 1, 1, 2, 1, 0])
>>> print(results)
{'0': {'precision': 0.5, 'recall': 0.5, 'f1-score': 0.5, 'support': 2}, '1': {'precision': 0.6666666666666666, 'recall': 1.0, 'f1-score': 0.8, 'support': 2}, '2': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 2}, 'accuracy': 0.5, 'macro avg': {'precision': 0.38888888888888884, 'recall': 0.5, 'f1-score': 0.43333333333333335, 'support': 6}, 'weighted avg': {'precision': 0.38888888888888884, 'recall': 0.5, 'f1-score': 0.43333333333333335, 'support': 6}}
```
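Since the examples above show the dictionary form, it may also help to see the plain-text report for the same data. As a point of comparison, this sketch calls scikit-learn's `classification_report` directly (the library this metric is based on); the exact formatting is scikit-learn's:

```python
>>> # Same data as the example above, passed to scikit-learn directly.
>>> from sklearn.metrics import classification_report
>>> text_report = classification_report(
...     [0, 1, 2, 0, 1, 2],  # references (y_true)
...     [0, 1, 1, 2, 1, 0],  # predictions (y_pred)
...     zero_division=0,     # avoid the zero-division warning for class 2
... )
>>> print(text_report)  # one row per class, then accuracy/macro avg/weighted avg rows
```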
## Citation

```bibtex
@article{scikit-learn,
  title={Scikit-learn: Machine Learning in {P}ython},
  author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V.
          and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P.
          and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and
          Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.},
  journal={Journal of Machine Learning Research},
  volume={12},
  pages={2825--2830},
  year={2011}
}
```
## Further References

- [scikit-learn `classification_report` documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html)