Add README

README.md
---
license: mit
language:
- en
tags:
- roberta
- text-classification
- ensemble
- clarity
- qevasion
pipeline_tag: text-classification
---

# RoBERTa Clarity Ensemble

This repository contains **3 RoBERTa-large models** fine-tuned for clarity classification (Clear Reply / Clear Non-Reply / Ambivalent).

## Models

| Model | Description |
|-------|-------------|
| `model-1/` | RoBERTa-large fine-tuned on the clarity task |
| `model-2/` | RoBERTa-large fine-tuned on the clarity task (different seed/split) |
| `model-3/` | RoBERTa-large fine-tuned on the clarity task (different seed/split) |
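
All three checkpoints can also be mirrored locally in a single call with `huggingface_hub` (a minimal sketch, assuming the `huggingface_hub` package is installed):

```python
from huggingface_hub import snapshot_download

# Downloads model-1/, model-2/ and model-3/ into one local directory
local_dir = snapshot_download("gigibot/ensemble-qeval")
print(local_dir)
```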

## Usage

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

# Load one model
model = AutoModelForSequenceClassification.from_pretrained("gigibot/ensemble-qeval", subfolder="model-1")
tokenizer = AutoTokenizer.from_pretrained("gigibot/ensemble-qeval", subfolder="model-1")

# Or load all 3 for ensemble voting (they share the same roberta-large tokenizer)
models = []
for i in [1, 2, 3]:
    m = AutoModelForSequenceClassification.from_pretrained(
        "gigibot/ensemble-qeval",
        subfolder=f"model-{i}"
    )
    m.eval()
    models.append(m)

# Ensemble inference: sum the logits from all models, then take the argmax
def ensemble_predict(text, models, tokenizer):
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=256)
    logits_sum = None
    for model in models:
        with torch.no_grad():
            out = model(**inputs)
        if logits_sum is None:
            logits_sum = out.logits
        else:
            logits_sum += out.logits
    return torch.argmax(logits_sum, dim=-1).item()
```
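
Continuing the snippet above, a call might look like this (the input string is a made-up illustration; how question and answer are formatted into one string depends on how the models were trained, which this README does not specify):

```python
pred = ensemble_predict(
    "Q: Will you support the bill? A: I have always cared deeply about this issue.",
    models,
    tokenizer,
)

# Label ids follow the Labels section below
id2label = {0: "Clear Reply", 1: "Clear Non-Reply", 2: "Ambivalent"}
print(pred, id2label[pred])
```

Summing raw logits is one simple fusion rule; averaging softmax probabilities or majority voting over per-model predictions are common alternatives and may differ on borderline inputs.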

## Labels

- 0: Clear Reply
- 1: Clear Non-Reply
- 2: Ambivalent

## Training

Each model was fine-tuned from `roberta-large` on the QEvasion clarity dataset; the three checkpoints differ in random seed and train/validation split (see the Models table above).