# EmoBench-UA: Emotions Detection Dataset in Ukrainian Texts

<img alt="EmoBench-UA" style="width: 50%; height: 50%" src="intro_logo.png">

**EmoBench-UA**: the first emotion detection dataset of its kind for Ukrainian texts. The dataset covers the detection of the basic emotions: Joy, Anger, Fear, Disgust, Surprise, Sadness, or None.

A text can contain any number of emotions: one, several, or none at all. Texts labelled *None* are those where all emotion-class labels are 0.
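
The *None* convention above can be sketched as a small helper. The per-emotion column names used here are illustrative assumptions, not necessarily the dataset's exact schema:

```python
# Hypothetical per-emotion binary columns; check the dataset schema for the real names.
EMOTIONS = ["Joy", "Anger", "Fear", "Disgust", "Surprise", "Sadness"]

def is_none_label(row: dict) -> bool:
    """A text counts as *None* when every emotion-class label is 0."""
    return all(row.get(emotion, 0) == 0 for emotion in EMOTIONS)
```

For example, a row with `Joy = 1` is not *None*, while an all-zero row is.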

*Binary*: specifically, this dataset contains binary labels that simply indicate the presence of any emotion in the text.

## Data Collection

Data collection was done via the [Toloka.ai](https://toloka.ai) crowdsourcing platform.

For the original Ukrainian texts, we used the open-source [corpus of Ukrainian tweets](https://github.com/kateryna-bobrovnyk/ukr-twi-corpus).

First, we pre-filtered the data:

**Length** We applied a length-based filter, discarding texts that were too short (fewer than 5 words), as such samples often consist of hashtags or other non-informative tokens. Similarly, overly long texts (50 words or more) were excluded, as longer sequences tend to obscure the central meaning and make it harder to accurately identify the expressed emotions.
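
The length filter can be sketched as follows; simple whitespace tokenization is an assumption here:

```python
def passes_length_filter(text: str, min_words: int = 5, max_words: int = 50) -> bool:
    """Keep texts with at least `min_words` and fewer than `max_words` words."""
    n_words = len(text.split())  # naive whitespace word count
    return min_words <= n_words < max_words
```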

**Toxicity** While toxic texts can carry quite strong emotions, to ensure annotators' well-being and the general appropriateness of our corpus, we filtered out overly toxic instances using our open-source [toxicity classifier](https://huggingface.co/ukr-detect/ukr-toxicity-classifier).
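
A minimal sketch of such a filtering step, with the scoring function injected as a callable (it would wrap the `ukr-detect/ukr-toxicity-classifier` model; the 0.5 threshold is an illustrative assumption, not the one used for this dataset):

```python
from typing import Callable, Iterable, List

def drop_toxic(texts: Iterable[str],
               toxicity_score: Callable[[str], float],
               threshold: float = 0.5) -> List[str]:
    """Keep only texts whose toxicity score stays at or below the threshold."""
    return [text for text in texts if toxicity_score(text) <= threshold]
```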

**Emotional Texts Pre-selection** To avoid an excessive imbalance toward emotionless texts, we performed a pre-selection step aimed at identifying texts likely to express emotions. Specifically, we applied the English emotion classifier [DistilRoBERTa-Emo-EN](https://huggingface.co/michellejieli/emotion_text_classifier) to Ukrainian texts translated with the [NLLB](https://huggingface.co/facebook/nllb-200-distilled-600M) model.
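
The two-stage pre-selection (translate, then classify) can be sketched with the models injected as callables; the `"neutral"` label name is an assumption about the English classifier's output:

```python
from typing import Callable, Iterable, List

def preselect_emotional(texts: Iterable[str],
                        translate_uk_to_en: Callable[[str], str],
                        emotion_label: Callable[[str], str]) -> List[str]:
    """Keep Ukrainian texts whose English translation is classified as emotional.

    `translate_uk_to_en` would wrap NLLB, and `emotion_label` the
    DistilRoBERTa emotion classifier.
    """
    return [t for t in texts if emotion_label(translate_uk_to_en(t)) != "neutral"]
```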

Then, we employed several control strategies to ensure the quality of the data:
* annotators were native Ukrainian speakers;
* annotators had to complete training and pass an examination before being allowed to annotate;
* annotators were permanently banned if they submitted each of their last three task pages in under 15 seconds, indicating low engagement;
* a one-day ban was triggered if three consecutive pages were skipped;
* annotators were asked to take a 30-minute break after completing 25 consecutive pages;
* control tasks were randomly injected to check annotation quality.
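
The fast-submission ban rule above can be sketched as follows (the exact platform-side bookkeeping is an assumption):

```python
from typing import Sequence

def should_ban_permanently(page_times_sec: Sequence[float],
                           limit_sec: float = 15.0,
                           window: int = 3) -> bool:
    """Permanent-ban rule: each of the last `window` submitted task pages
    took under `limit_sec` seconds, suggesting low engagement."""
    last = list(page_times_sec)[-window:]
    return len(last) == window and all(t < limit_sec for t in last)
```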

Finally, each sample was annotated by 5 annotators, and only instances with a 90% confidence score were kept for the dataset.
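
How the confidence score is computed is platform-specific; purely as a rough illustration, one simple proxy is the share of the 5 annotators agreeing with the majority vote (this formula is an assumption, not the aggregation actually used):

```python
from typing import Sequence

def agreement_confidence(labels: Sequence[int]) -> float:
    """Share of annotators agreeing with the majority label (0/1 votes)."""
    ones = sum(labels)
    return max(ones, len(labels) - ones) / len(labels)
```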

## Splits Statistics

Overall, the dataset contains 4949 labelled instances. The Krippendorff's alpha inter-annotator agreement score is 0.85.
Then, we partitioned the dataset into fixed train/development/test subsets following a 50/5/45% split ratio.
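
A minimal sketch of such a fixed partition (the shuffling seed is an illustrative assumption; the published splits themselves are fixed files):

```python
import random
from typing import Sequence, Tuple

def make_splits(samples: Sequence, seed: int = 42) -> Tuple[list, list, list]:
    """Partition samples into fixed train/dev/test subsets (50/5/45%)."""
    items = list(samples)
    random.Random(seed).shuffle(items)  # fixed seed -> reproducible splits
    n = len(items)
    n_train, n_dev = round(n * 0.50), round(n * 0.05)
    return items[:n_train], items[n_train:n_train + n_dev], items[n_train + n_dev:]
```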

<img alt="Splits Statistics" style="width: 50%; height: 50%" src="data_stats.png">

## Citation