Instructions to use LEIA/LEIA-multilingual with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use LEIA/LEIA-multilingual with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="LEIA/LEIA-multilingual")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("LEIA/LEIA-multilingual")
model = AutoModelForSequenceClassification.from_pretrained("LEIA/LEIA-multilingual")
```

- Notebooks
- Google Colab
- Kaggle
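A `text-classification` pipeline returns a list of `{"label": ..., "score": ...}` dicts, one per input text. The snippet below sketches how to pick the top prediction from such a result; the sample result and the label name `"Sadness"` are illustrative placeholders, not necessarily LEIA's actual output for any input.

```python
# Illustrative shape of a text-classification pipeline result.
# Calling pipe("some text") yields a list of {"label", "score"} dicts;
# the values below are placeholders, not real model output.
sample_result = [{"label": "Sadness", "score": 0.91}]

# Pick the highest-scoring label for the input text.
top = max(sample_result, key=lambda d: d["score"])
print(top["label"])
```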
See the paper [LEIA: Linguistic Embeddings for the Identification of Affect](https://doi.org/10.1140/epjds/s13688-023-00427-0) for further details.

## Evaluation

We evaluated LEIA-multilingual on posts with self-annotated emotion labels identified as non-English using an ensemble of language identification tools.

The table below shows the macro-F1 scores aggregated across emotion categories for each language:
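For context on the reported metric: macro-F1 is the unweighted mean of the per-class F1 scores, so each emotion category counts equally regardless of how frequent it is. A minimal pure-Python sketch (the emotion labels in the toy example are placeholders, not LEIA's actual label set):

```python
from collections import defaultdict

def macro_f1(y_true, y_pred):
    """Macro-F1: unweighted mean of per-class F1 scores."""
    classes = set(y_true) | set(y_pred)
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1
            fn[t] += 1
    f1s = []
    for c in classes:
        prec = tp[c] / (tp[c] + fp[c]) if (tp[c] + fp[c]) else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if (tp[c] + fn[c]) else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if (prec + rec) else 0.0)
    return sum(f1s) / len(f1s)

# Toy example with three placeholder emotion labels:
gold = ["sad", "sad", "joy", "joy", "fear", "fear"]
pred = ["sad", "joy", "joy", "joy", "fear", "sad"]
print(round(macro_f1(gold, pred), 4))  # → 0.6556
```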