---
license: mit
language:
- multilingual
---

# Model Card for mt5-base-multi-label-all-cs-iv

<!-- Provide a quick summary of what the model is/does. -->

This model is fine-tuned for multi-label seq2seq text classification of Supportive Interactions in Instant Messenger dialogs of Adolescents.

## Model Description
The model was fine-tuned on a dataset of Czech Instant Messenger dialogs of Adolescents. The classification is multi-label. For each utterance in the input, the model outputs any combination of the tags 'NO TAG', 'Informační podpora', 'Emocionální podpora', 'Začlenění do skupiny', 'Uznání', 'Nabídka pomoci' as a single string joined with ', ' (ordered alphabetically). Each label indicates the presence of the corresponding category of Supportive Interactions ('no tag', 'informational support', 'emotional support', 'social companionship', 'appraisal', 'instrumental support') in that utterance. The input of the model is a sequence of utterances joined with ';'. The output is a sequence of per-utterance label sets such as: 'NO TAG; Informační podpora, Uznání; NO TAG'.
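To make this convention concrete, the short sketch below walks through the example output string given above; the utterance texts and labels are purely illustrative, not real model predictions.

```python
# Illustrative input/output format (labels taken from the example above, not real predictions)
context_window = "Utterance1;Utterance2;Utterance3"            # utterances joined with ';'
example_output = "NO TAG; Informační podpora, Uznání; NO TAG"  # one label set per utterance, joined with '; '

# Recover per-utterance label sets: split on '; ' first, then on ', '
per_utterance_labels = [labels.split(", ") for labels in example_output.split("; ")]
print(per_utterance_labels)
# [['NO TAG'], ['Informační podpora', 'Uznání'], ['NO TAG']]
```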
- **Developed by:** Anonymous
- **Language(s):** multilingual
- **Finetuned from:** mt5-base
## Model Sources

<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/chi2024submission
- **Paper:** Stay tuned!
## Usage

Here is how to use this model to classify a context window of a dialog:
```python
import itertools

import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Target dialog context window: utterances joined with ';'
test_texts = ['Utterance1;Utterance2;Utterance3']

# Load the model and tokenizer
checkpoint_path = "chi2024/mt5-base-multi-label-all-cs-iv"
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint_path)\
    .to("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained(checkpoint_path)

# Define helper functions
def predict_one(text):
    # Tokenize the whole context window as one input sequence
    inputs = tokenizer(text, return_tensors="pt", padding=True,
                       truncation=True, max_length=256).to(model.device)
    # Generate a single label string, e.g. 'NO TAG; Informační podpora, Uznání; NO TAG'
    outputs = model.generate(**inputs)
    decoded = tokenizer.batch_decode(outputs, skip_special_tokens=True)
    # Split on '; ' to obtain one (possibly multi-label) label string per utterance
    predicted_sequence = list(
        itertools.chain(*(pred_one.split("; ") for pred_one in decoded)))
    return predicted_sequence

# Run the prediction
dec = predict_one(test_texts[0])
print(dec)
```
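For the illustrative context window above, `dec` contains one label string per utterance. Assuming the model returned the example output from the Model Description, `'NO TAG; Informační podpora, Uznání; NO TAG'`, the printed result would be `['NO TAG', 'Informační podpora, Uznání', 'NO TAG']`.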