# ModernBERT Fine-tuned for Binary Classification
This model is a fine-tuned version of answerdotai/ModernBERT-base for binary classification.
## Training Details
- Training epochs: 3
- Batch size: 16
- Learning rate: 2e-05
- Training samples: 800
- Validation samples: 200
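The hyperparameters above can be expressed as a `Trainer` configuration. This is an illustrative sketch, not the exact training script: `train_ds` and `val_ds` are placeholders for tokenized dataset splits (800 and 200 examples in the actual run), and the `output_dir` name is an assumption.

```python
from transformers import (
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

# Base checkpoint with a fresh 2-label classification head
model = AutoModelForSequenceClassification.from_pretrained(
    "answerdotai/ModernBERT-base",
    num_labels=2,  # binary classification
)

# Mirrors the listed hyperparameters: 3 epochs, batch size 16, lr 2e-5
training_args = TrainingArguments(
    output_dir="modernbert-binary",  # placeholder path
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_ds,  # placeholder: tokenized training split (800 samples)
    eval_dataset=val_ds,     # placeholder: tokenized validation split (200 samples)
)
trainer.train()
```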
## Evaluation Results
- Accuracy: 0.4400
- Precision: 0.9333
- Recall: 0.4421
- F1 Score: 0.6000
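The reported F1 score is internally consistent with the precision and recall above, since F1 is their harmonic mean:

```python
# Sanity check: F1 = 2 * P * R / (P + R), using the reported metrics
precision = 0.9333
recall = 0.4421

f1 = 2 * precision * recall / (precision + recall)
print(f"{f1:.4f}")  # → 0.6000, matching the reported F1 score
```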
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("KamilHugsFaces/clay-modernbert-v1")
model = AutoModelForSequenceClassification.from_pretrained("KamilHugsFaces/clay-modernbert-v1")

text = "Your text here"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

# Run inference without tracking gradients
with torch.no_grad():
    outputs = model(**inputs)

# Predicted class index (0 or 1) and softmax probabilities
prediction = torch.argmax(outputs.logits, dim=-1).item()
probabilities = torch.softmax(outputs.logits, dim=-1)[0]

print(f"Prediction: {prediction}")
print(f"Confidence: {probabilities[prediction]:.4f}")
```