---
license: mit
base_model:
- deepset/gbert-large
---
# GBERT QLoRA – Grounding Act Classification
This model is a fine-tuned version of [deepset/gbert-large](https://huggingface.co/deepset/gbert-large), adapted with QLoRA for efficient binary classification of German dialogue utterances into two classes (a typical adapter setup is sketched below the list):
- `ADVANCE`: contributions that move the dialogue forward (e.g., confirmations, follow-ups, elaborations)
- `NON-ADVANCE`: all other utterances (e.g., vague responses, misunderstandings, irrelevant comments)
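The exact training configuration is not documented on this card. As a point of reference, a QLoRA fine-tune of `deepset/gbert-large` typically combines 4-bit quantization (`bitsandbytes`) with LoRA adapters (`peft`); the hyperparameters below (rank, alpha, dropout, target modules) are illustrative assumptions, not the values used to train this model:

```python
# Minimal QLoRA training setup sketch (assumed config, not the author's exact one)
import torch
from transformers import AutoModelForSequenceClassification, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the frozen base model in 4-bit NF4 quantization
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForSequenceClassification.from_pretrained(
    "deepset/gbert-large",
    num_labels=2,  # ADVANCE vs. NON-ADVANCE
    quantization_config=bnb_config,
)
base = prepare_model_for_kbit_training(base)

# Attach trainable low-rank adapters to the quantized weights
lora_config = LoraConfig(
    task_type="SEQ_CLS",
    r=16,              # assumed adapter rank
    lora_alpha=32,     # assumed scaling factor
    lora_dropout=0.1,
    target_modules=["query", "value"],  # BERT attention projections
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapters and classifier head train
```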
## Use Cases
- Dialogue system analysis
- Teacher-student interaction classification
- Analysis of grounding in institutional advising or classroom discourse
## How to Use
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("MB55/gbert-lora-final")
tokenizer = AutoTokenizer.from_pretrained("MB55/gbert-lora-final")

# German for: "Please explain that once more."
text = "Bitte erläutern Sie das noch einmal."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():  # inference only, no gradients needed
    outputs = model(**inputs)

predicted_class = outputs.logits.argmax(dim=-1).item()
print(predicted_class)
```
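The integer index alone does not say which class is which; the mapping is stored in the model config at export time, so read `id2label` rather than assuming an order:

```python
# Resolve the predicted index to its label name via the exported config.
# (Which index corresponds to ADVANCE depends on how the model was saved.)
label = model.config.id2label[predicted_class]
print(label)
```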