How to use tcepi/mbp_pas_model with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="tcepi/mbp_pas_model")
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("tcepi/mbp_pas_model")
model = AutoModelForSequenceClassification.from_pretrained("tcepi/mbp_pas_model")
```

This model is a fine-tune of ModernBERT-base for binary classification, trained on the tcepi/mbp_pas_dataset dataset.
| Metric | Value |
|---|---|
| Accuracy | 0.9861 |
| F1-Score | 0.9863 |
| Precision | 0.9796 |
| Recall | 0.9931 |
| ROC-AUC | 0.9988 |
| Specificity | 0.9789 |
Confusion matrix:

| | Predicted Negative | Predicted Positive |
|---|---|---|
| Actual Negative | 139 (TN) | 3 (FP) |
| Actual Positive | 1 (FN) | 144 (TP) |
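The headline metrics above follow directly from these confusion-matrix counts. As a sanity check, a minimal sketch recomputing them in plain Python:

```python
# Confusion-matrix counts from the table above
tn, fp, fn, tp = 139, 3, 1, 144

total = tn + fp + fn + tp
accuracy = (tp + tn) / total                        # 283 / 287
precision = tp / (tp + fp)                          # positive predictive value
recall = tp / (tp + fn)                             # sensitivity
specificity = tn / (tn + fp)                        # true negative rate
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean

print(f"Accuracy:    {accuracy:.4f}")     # 0.9861
print(f"Precision:   {precision:.4f}")    # 0.9796
print(f"Recall:      {recall:.4f}")       # 0.9931
print(f"F1-Score:    {f1:.4f}")           # 0.9863
print(f"Specificity: {specificity:.4f}")  # 0.9789
```

All five values match the metrics table, rounded to four decimal places.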
```
              precision    recall  f1-score   support

    Negativo     0.9929    0.9789    0.9858       142
    Positivo     0.9796    0.9931    0.9863       145

    accuracy                         0.9861       287
   macro avg     0.9862    0.9860    0.9861       287
weighted avg     0.9862    0.9861    0.9861       287
```
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("tcepi/mbp_pas_model")
model = AutoModelForSequenceClassification.from_pretrained("tcepi/mbp_pas_model")

# Classify a text
text = "Seu texto aqui"  # "Your text here"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    outputs = model(**inputs)
predictions = torch.softmax(outputs.logits, dim=-1)
predicted_class = torch.argmax(predictions, dim=-1).item()

print(f"Predicted class: {model.config.id2label[predicted_class]}")
print(f"Probabilities: {predictions.tolist()}")
```
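For intuition on the softmax/argmax step in the snippet above, here is a self-contained sketch with made-up logits (no model download needed; the logit values are purely illustrative, not real model output):

```python
import math

def softmax(logits):
    """Convert raw logits to probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for a two-class head (index 0: Negativo, index 1: Positivo)
logits = [-1.2, 3.4]
probs = softmax(logits)
predicted_class = probs.index(max(probs))

print(predicted_class)  # 1, i.e. the higher-logit class wins
```

`torch.softmax` and `torch.argmax` do exactly this over the logits tensor, and `model.config.id2label` then maps the winning index back to a human-readable label.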
The model was trained on the tcepi/mbp_pas_dataset dataset.
Base model
answerdotai/ModernBERT-base