Update README.md
README.md CHANGED

**Removed.** The previous README was the stub card generated automatically by the 🤗 Trainer ("You should probably proofread and complete it, then remove this comment."). Apart from "More information needed" placeholders under **Model description**, **Intended uses & limitations**, and **Training and evaluation data**, it contained:

Evaluation set results:
- Loss: 0.0404
- Mse: 0.0404
- Mae: 0.1499

Training hyperparameters (partial):
- lr_scheduler_type: linear
- num_epochs: 3

Training results:

| Training Loss | Epoch | Step | Validation Loss | Mse    | Mae    |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.0078        | 1.0   | 1311 | 0.0420          | 0.0420 | 0.1557 |
| 0.0154        | 2.0   | 2622 | 0.0368          | 0.0368 | 0.1420 |
| 0.0098        | 3.0   | 3933 | 0.0378          | 0.0378 | 0.1441 |

**Added.** The new model card:

---
language: en
datasets:
- ibm-research/argument_quality_ranking_30k
metrics:
- mean-squared-error
- mean-absolute-error
model-name: bert-base-uncased-finetuned-arg-quality
tags:
- regression
- argument-quality
- bert
- fine-tuned
---

# BERT Base (uncased) fine-tuned on Argument Quality Ranking

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the **IBM Argument Quality Ranking** dataset. It predicts the **quality of an argument** as a score between 0 and 1, which can also be converted to a **1–5 rating** (see the sketch below).
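
The 0–1 score maps onto the 1–5 scale by a linear rescaling, the same one used in the usage example further down; a minimal sketch:

```python
import numpy as np

def score_to_rating(score: float) -> int:
    # Rescale a quality score in [0, 1] to [1, 5], then round
    # to the nearest integer and clamp to the valid rating range.
    return int(np.clip(np.rint(score * 4 + 1), 1, 5))

print(score_to_rating(0.10))  # 1
print(score_to_rating(0.55))  # 3
print(score_to_rating(0.95))  # 5
```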

---

## Model Details

- **Model type**: BERT (base, uncased)
- **Fine-tuned on**: IBM Argument Quality Ranking (~30k arguments)
- **Task**: Regression (argument quality score)
- **Output**: A continuous quality score in [0, 1]; the usage example below rounds it to an integer 1–5 rating
- **Training framework**: [🤗 Transformers](https://github.com/huggingface/transformers)

---

## Training

The model was fine-tuned with the following settings; a reproduction sketch follows the list.

- Epochs: 3
- Batch size: 16
- Learning rate: 2e-5
- Optimizer: AdamW
- Evaluation metrics: Mean Squared Error (MSE), Mean Absolute Error (MAE)
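
This repo does not include the training script; the following is a minimal sketch of how such a run could be set up with the 🤗 `Trainer` under the settings above. The dataset column names (`argument`, `MACE-P`) and the `validation` split are assumptions, not confirmed by the card:

```python
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("ibm-research/argument_quality_ranking_30k")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# num_labels=1 gives a single-output regression head (trained with MSE loss)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=1)

def tokenize(batch):
    # "argument" (text) and "MACE-P" (gold quality score) are assumed column names
    tokens = tokenizer(batch["argument"], truncation=True)
    tokens["labels"] = [float(x) for x in batch["MACE-P"]]
    return tokens

tokenized = dataset.map(tokenize, batched=True)

def compute_metrics(eval_pred):
    # eval_pred unpacks to (predictions, labels); predictions have shape (N, 1)
    preds, labels = eval_pred
    preds = preds.squeeze(-1)
    return {"mse": float(np.mean((preds - labels) ** 2)),
            "mae": float(np.mean(np.abs(preds - labels)))}

training_args = TrainingArguments(
    output_dir="bert-base-uncased-finetuned-arg-quality",
    num_train_epochs=3,              # Epochs: 3
    per_device_train_batch_size=16,  # Batch size: 16
    learning_rate=2e-5,              # AdamW is the Trainer's default optimizer
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```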

---

## Evaluation Results

On the test set:

| Metric | Value  |
|--------|--------|
| MSE    | 0.0404 |
| MAE    | 0.1499 |
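
For reference, the two metrics can be computed with scikit-learn; the scores below are made-up placeholders, not values from the actual test set:

```python
from sklearn.metrics import mean_absolute_error, mean_squared_error

y_true = [0.72, 0.31, 0.55]  # placeholder gold quality scores
y_pred = [0.68, 0.40, 0.50]  # placeholder model predictions

print("MSE:", mean_squared_error(y_true, y_pred))
print("MAE:", mean_absolute_error(y_true, y_pred))
```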

---

## How to Use

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import numpy as np

model_name = "ByteMeHarder-404/bert-base-uncased-finetuned-arg-quality"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

def predict_quality(arguments):
    # Tokenize a batch of argument strings and move the tensors to the model's device
    inputs = tokenizer(arguments, truncation=True, padding=True, return_tensors="pt")
    device = next(model.parameters()).device
    inputs = {k: v.to(device) for k, v in inputs.items()}
    model.eval()
    with torch.no_grad():
        outputs = model(**inputs)
    # Rescale the raw 0-1 quality scores to the 1-5 scale and round to integer ratings
    scores = outputs.logits.squeeze(-1).cpu().numpy()
    return np.rint(scores * 4 + 1).astype(int)

# Example
args = [
    "School uniforms reduce individuality.",
    "World Peace is great",
    "Homework improves student learning outcomes.",
]

print("Ratings:", predict_quality(args))  # 1-5 ratings, one per argument
```
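
Running the example prints a NumPy array with one integer rating (1–5) per argument; the exact values depend on the released weights. To get the raw 0–1 scores instead, return `scores` before the rescaling step.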