---
license: mit
base_model:
- deepset/gbert-large
---
|
|
|
|
|
# GBERT QLoRA – Grounding Act Classification
|
|
|
|
|
This model is a fine-tuned version of [deepset/gbert-large](https://huggingface.co/deepset/gbert-large), adapted with QLoRA (4-bit quantization plus LoRA adapters) for efficient binary classification of German dialogue utterances into:
|
|
|
|
|
- `ADVANCE`: A contribution that moves the dialogue forward (e.g., confirmations, follow-ups, elaborations)
- `NON-ADVANCE`: Any other utterance (e.g., vague responses, misunderstandings, irrelevant comments)
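
For context, a comparable QLoRA setup for a sequence classifier can be sketched with `peft` and `bitsandbytes` as below. This is a minimal illustration only; the rank, alpha, dropout, and target modules are assumptions, not the values used to train this checkpoint.

```python
import torch
from transformers import AutoModelForSequenceClassification, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model with 4-bit NF4 quantization (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForSequenceClassification.from_pretrained(
    "deepset/gbert-large",
    num_labels=2,
    quantization_config=bnb_config,
)
base = prepare_model_for_kbit_training(base)

# Attach LoRA adapters; these hyperparameters are illustrative guesses,
# not the checkpoint's documented settings
lora_config = LoraConfig(
    task_type="SEQ_CLS",
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["query", "value"],  # BERT-style attention projections
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only adapters (and classifier head) train
```

With `task_type="SEQ_CLS"`, `peft` keeps the classification head trainable alongside the adapters, which is what a binary classifier like this one needs.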
|
|
|
|
|
## Use Cases |
|
|
|
|
|
- Dialogue system analysis
- Teacher-student interaction classification
- Analysis of grounding in institutional advising or classroom discourse
|
|
|
|
|
|
|
|
|
|
|
## How to Use |
|
|
|
|
|
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the fine-tuned classifier and its tokenizer
model = AutoModelForSequenceClassification.from_pretrained("MB55/gbert-lora-final")
tokenizer = AutoTokenizer.from_pretrained("MB55/gbert-lora-final")

# Example utterance ("Please explain that again.")
text = "Bitte erläutern Sie das noch einmal."
inputs = tokenizer(text, return_tensors="pt")

# Inference only, so gradients are not needed
with torch.no_grad():
    outputs = model(**inputs)

# Index of the highest-scoring class
predicted_class = outputs.logits.argmax(dim=-1).item()
print(predicted_class)
```
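
Continuing the snippet above, the integer index can be mapped to a class name via the `id2label` mapping in the model config, assuming that mapping was saved with the checkpoint; otherwise, verify which index corresponds to `ADVANCE` vs. `NON-ADVANCE` yourself.

```python
# Assumes the checkpoint stores an id2label mapping such as
# {0: "NON-ADVANCE", 1: "ADVANCE"}; check model.config.id2label to confirm.
probs = torch.softmax(outputs.logits, dim=-1).squeeze()
label = model.config.id2label[predicted_class]
print(f"{label} (p = {probs[predicted_class]:.2f})")
```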