Commit fe8faca by Chi Honolulu (parent 234ae08): Upload README.md
This model is fine-tuned for binary text classification of Supportive Interactions in Instant Messenger dialogs of Adolescents in Czech.

## Model Details

### Model Description

The model was fine-tuned on a Czech dataset of Instant Messenger dialogs of Adolescents. The classification is binary and the model outputs probabilities for labels {0, 1}: Supportive Interactions present or not.
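The model consumes a context window of consecutive utterances joined by semicolons (the `'Utterance1;Utterance2;Utterance3'` format used in the usage example). As a minimal sketch of assembling such a window — `make_context_window` is a hypothetical helper for illustration, not part of this model's API:

```python
def make_context_window(utterances, n=3):
    # Keep only the n most recent utterances; joining with ';' matches the
    # input format shown in the usage example (hypothetical helper).
    return ';'.join(utterances[-n:])

dialog = ['Ahoj', 'Jak se mas?', 'Dobre, diky', 'To je super']
print(make_context_window(dialog))  # → 'Jak se mas?;Dobre, diky;To je super'
```

Keeping only the most recent utterances mirrors the tokenizer's left-side truncation below: when a dialog is too long, the oldest context is dropped first.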

Here is how to use this model to classify a context window of a dialogue:

```python
import numpy as np
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Prepare input texts. This model is pretrained and fine-tuned for Czech
test_texts = ['Utterance1;Utterance2;Utterance3']

# Load the model and tokenizer
model = AutoModelForSequenceClassification.from_pretrained(
    'chi2024/robeczech-base-binary-cs-iib', num_labels=2).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(
    'chi2024/robeczech-base-binary-cs-iib',
    use_fast=False, truncation_side='left')
assert tokenizer.truncation_side == 'left'

# Define helper functions
def get_probs(text, tokenizer, model):
    inputs = tokenizer(text, padding=True, truncation=True, max_length=256,
                       return_tensors="pt").to("cuda")
    outputs = model(**inputs)
    return outputs[0].softmax(1)

def preds2class(probs, threshold=0.5):
    # Zero out probabilities below the threshold before taking the argmax
    pclasses = np.zeros(probs.shape)
    pclasses[np.where(probs >= threshold)] = 1
    return pclasses.argmax(-1)

def print_predictions(texts):
    probabilities = [get_probs(
        texts[i], tokenizer, model).cpu().detach().numpy()[0]
        for i in range(len(texts))]
    predicted_classes = preds2class(np.array(probabilities))
    for c, p in zip(predicted_classes, probabilities):
        print(f'{c}: {p}')
```
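The thresholding in `preds2class` can be exercised in isolation with plain NumPy (a standalone sketch with made-up probabilities): class 1 is only predicted when its probability reaches the threshold, and a row where neither thresholded entry survives falls back to class 0.

```python
import numpy as np

def preds2class(probs, threshold=0.5):
    # Zero out probabilities below the threshold, then take the argmax;
    # an all-zero row after thresholding resolves to class 0.
    pclasses = np.zeros(probs.shape)
    pclasses[np.where(probs >= threshold)] = 1
    return pclasses.argmax(-1)

probs = np.array([[0.9, 0.1],    # confident class 0
                  [0.3, 0.7],    # class 1 clears the threshold
                  [0.45, 0.55]]) # class 1 just clears it
print(preds2class(probs))  # [0 1 1]
```

Raising `threshold` above 0.5 makes the positive label (Supportive Interaction present) harder to trigger, which is a common way to trade recall for precision in binary classifiers.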