---
title: Precision Recall Fscore Accuracy
tags:
- evaluate
- metric
colorFrom: gray
colorTo: green
description: >-
  This metric calculates precision, recall, accuracy, and fscore for
  classification tasks using scikit-learn.
sdk: gradio
sdk_version: 5.23.1
app_file: app.py
pinned: false
datasets: []
---
# Metric Card for Precision Recall Fscore Accuracy

This metric calculates precision, recall, accuracy, and fscore for classification tasks using scikit-learn.
## How to Use

```python
>>> import evaluate
>>> metric = evaluate.load("precision_recall_fscore_accuracy", average="binary")
>>> predictions = [0, 1, 0, 1]
>>> references = [1, 1, 0, 0]
>>> metric.compute(predictions=predictions, references=references)
{'precision': 0.5, 'recall': 0.5, 'fscore': 0.5, 'accuracy': 0.5}
```
## Inputs

- predictions (List of int|str): List of predicted labels.
- references (List of int|str): List of true labels.
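Labels need not be binary. For multiclass inputs, scikit-learn's averaging modes decide how per-class scores are combined; a minimal sketch assuming the metric forwards an `average` argument to scikit-learn, as in the binary example above (the inputs here are hypothetical):

```python
from sklearn.metrics import precision_recall_fscore_support

# Hypothetical multiclass labels with three classes.
references = [0, 1, 2, 2, 1, 0]
predictions = [0, 2, 2, 2, 1, 1]

# average="macro" computes precision/recall/F1 per class,
# then takes the unweighted mean across classes.
precision, recall, fscore, _ = precision_recall_fscore_support(
    references, predictions, average="macro"
)
print(round(precision, 4), round(recall, 4), round(fscore, 4))
```

With `average="micro"` the counts are pooled across classes instead, which for single-label multiclass data makes precision, recall, and F1 all equal to accuracy.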
## Outputs

Dictionary containing the following metrics:

- precision (float): Precision score.
- recall (float): Recall score.
- fscore (float): F1 score.
- accuracy (float): Accuracy score.
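Since the card states the values come from scikit-learn, the binary example above can be reproduced directly; a sketch, assuming the metric wraps `precision_recall_fscore_support` and `accuracy_score`:

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

predictions = [0, 1, 0, 1]
references = [1, 1, 0, 0]

# Ground truth goes first in scikit-learn's (y_true, y_pred) convention.
precision, recall, fscore, _ = precision_recall_fscore_support(
    references, predictions, average="binary"
)
accuracy = accuracy_score(references, predictions)

results = {
    "precision": float(precision),
    "recall": float(recall),
    "fscore": float(fscore),
    "accuracy": float(accuracy),
}
print(results)  # {'precision': 0.5, 'recall': 0.5, 'fscore': 0.5, 'accuracy': 0.5}
```

Here one prediction of class 1 is correct and one is wrong (precision 0.5), one true positive is found and one missed (recall 0.5), and 2 of 4 labels match overall (accuracy 0.5).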
## Citation

```bibtex
@article{scikit-learn,
  title={Scikit-learn: Machine Learning in {P}ython},
  author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V.
          and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P.
          and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and
          Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.},
  journal={Journal of Machine Learning Research},
  volume={12},
  pages={2825--2830},
  year={2011}
}
```