---
language: en
datasets:
- ibm-research/argument_quality_ranking_30k
metrics:
- mean-squared-error
- mean-absolute-error
model-name: bert-base-uncased-finetuned-arg-quality
tags:
- regression
- argument-quality
- bert
- fine-tuned
---

# BERT Base (uncased) fine-tuned on Argument Quality Ranking

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the **IBM Argument Quality Ranking** dataset. It predicts the **rating of an argument** as an integer between **1** and **5**.

---

## Model Details
- **Model type**: BERT (base, uncased)  
- **Fine-tuned on**: IBM Argument Quality Ranking (~30k arguments)  
- **Task**: Regression (argument quality score)  
- **Output**: Integer between 1 and 5  
- **Training framework**: [🤗 Transformers](https://github.com/huggingface/transformers)  

---

## Training
- Epochs: 3  
- Batch size: 16  
- Learning rate: 2e-5  
- Optimizer: AdamW  
- Evaluation metrics: Mean Squared Error (MSE), Mean Absolute Error (MAE)
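The hyperparameters above correspond to a standard 🤗 `Trainer` setup. The actual training script is not published; a sketch under that assumption might look like:

```python
from transformers import (AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# A single-label regression head on top of bert-base-uncased.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=1,
    problem_type="regression",
)

training_args = TrainingArguments(
    output_dir="bert-arg-quality",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,  # AdamW is the Trainer's default optimizer
)

# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=..., eval_dataset=...)
# trainer.train()
```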

---

## Evaluation Results
On the test set:

| Metric | Value |
|--------|-------|
| MSE    | 0.0404 |
| MAE    | 0.1499 |

---

## How to Use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import numpy as np

model_name = "ByteMeHarder-404/bert-base-uncased-finetuned-arg-quality"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

def predict_quality(arguments):
    inputs = tokenizer(arguments, truncation=True, padding=True, return_tensors="pt")
    device = next(model.parameters()).device
    inputs = {k: v.to(device) for k, v in inputs.items()}
    model.eval()
    with torch.no_grad():
        outputs = model(**inputs)
        # Rescale the raw logits from [0, 1] to [1, 5] and round to integers.
        scores = outputs.logits.squeeze(-1).cpu().numpy() * 4 + 1
        preds = np.rint(scores).astype(int)
    return preds

# Example
args = [
    "School uniforms reduce individuality.",
    "World Peace is great",
    "Homework improves student learning outcomes."
]

print("Ratings:", predict_quality(args))  # Output: 1–5 ratings
```