# BERT Base (uncased) fine-tuned on Argument Quality Ranking
This model is a fine-tuned version of bert-base-uncased on the IBM Argument Quality Ranking dataset. It predicts the quality of an argument as an integer rating between 1 and 5.
## Model Details
- Model type: BERT (base, uncased)
- Fine-tuned on: IBM Argument Quality Ranking (~30k arguments)
- Task: Regression (argument quality score)
- Output: Integer rating between 1 and 5
- Training framework: 🤗 Transformers
## Training
- Epochs: 3
- Batch size: 16
- Learning rate: 2e-5
- Optimizer: AdamW
- Evaluation metrics: Mean Squared Error (MSE), Mean Absolute Error (MAE)
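AdamW differs from classic Adam in that weight decay is applied directly to the weights, decoupled from the adaptive gradient step. A minimal scalar sketch of one update (hyperparameter values other than the learning rate are illustrative defaults, not the exact training configuration):

```python
import math

def adamw_step(w, g, m, v, t, lr=2e-5, b1=0.9, b2=0.999, eps=1e-8, wd=0.01):
    """One AdamW update for a single scalar parameter.

    Weight decay (wd * w) is applied to the weight directly, decoupled
    from the adaptive gradient term.
    """
    m = b1 * m + (1 - b1) * g          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * g * g      # second-moment (uncentered variance) estimate
    m_hat = m / (1 - b1 ** t)          # bias correction for step t
    v_hat = v / (1 - b2 ** t)
    w = w - lr * (m_hat / (math.sqrt(v_hat) + eps) + wd * w)
    return w, m, v

w, m, v = 0.5, 0.0, 0.0
for t in range(1, 4):                  # three steps with a constant gradient
    w, m, v = adamw_step(w, g=1.0, m=m, v=v, t=t)
print(w)  # slightly below 0.5: both the gradient and the decay pull w down
```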
## Evaluation Results
On the test set:
| Metric | Value |
|---|---|
| MSE | 0.0404 |
| MAE | 0.1499 |
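Both metrics are straightforward to compute; a minimal pure-Python sketch (the sample predictions and labels below are made-up illustrations, not dataset values):

```python
def mse(preds, labels):
    # Mean Squared Error: average of squared prediction errors
    return sum((p - l) ** 2 for p, l in zip(preds, labels)) / len(preds)

def mae(preds, labels):
    # Mean Absolute Error: average of absolute prediction errors
    return sum(abs(p - l) for p, l in zip(preds, labels)) / len(preds)

preds = [0.42, 0.75, 0.10]
labels = [0.50, 0.70, 0.30]
print(round(mse(preds, labels), 4))  # 0.0163
print(round(mae(preds, labels), 4))  # 0.11
```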
## How to Use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import numpy as np

model_name = "ByteMeHarder-404/bert-base-uncased-finetuned-arg-quality"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

def predict_quality(arguments):
    inputs = tokenizer(arguments, truncation=True, padding=True, return_tensors="pt")
    device = next(model.parameters()).device
    inputs = {k: v.to(device) for k, v in inputs.items()}
    model.eval()
    with torch.no_grad():
        outputs = model(**inputs)
    # Rescale the regression head's output to the 1-5 rating range
    # and round to the nearest integer rating.
    scores = outputs.logits.squeeze(-1).cpu().numpy() * 4 + 1
    return np.rint(scores).astype(int)

# Example
args = [
    "School uniforms reduce individuality.",
    "World Peace is great",
    "Homework improves student learning outcomes.",
]
print("Ratings:", predict_quality(args))  # Output: 1-5 ratings
```