Add sections
#3
by nielsr (HF Staff) - opened

README.md CHANGED
@@ -301,6 +301,62 @@ LlamaLens is a specialized multilingual LLM designed for analyzing news and soci
## Dataset Details

This dataset comprises various sub-datasets focusing on different text classification tasks related to news and social media analysis. A detailed breakdown of the datasets and their statistics is provided in the metadata section above.

### Features

- Multilingual support (Arabic, English, Hindi)
- 18 NLP tasks with 52 datasets
- Optimized for news and social media content analysis

## 📂 Dataset Overview

### English Datasets
| **Task** | **Dataset** | **# Labels** | **# Train** | **# Test** | **# Dev** |
|---------------------------|------------------------------|--------------|-------------|------------|-----------|
| Checkworthiness | CT24_T1 | 2 | 22,403 | 1,031 | 318 |
| Claim | claim-detection | 2 | 23,224 | 7,267 | 5,815 |
| Cyberbullying | Cyberbullying | 6 | 32,551 | 9,473 | 4,751 |
| Emotion | emotion | 6 | 280,551 | 82,454 | 41,429 |
| Factuality | News_dataset | 2 | 28,147 | 8,616 | 4,376 |
| Factuality | Politifact | 6 | 14,799 | 4,230 | 2,116 |
| News Genre Categorization | CNN_News_Articles_2011-2022 | 6 | 32,193 | 5,682 | 9,663 |
| News Genre Categorization | News_Category_Dataset | 42 | 145,748 | 41,740 | 20,899 |
| News Genre Categorization | SemEval23T3-subtask1 | 3 | 302 | 83 | 130 |
| Summarization | xlsum | -- | 306,493 | 11,535 | 11,535 |
| Offensive Language | Offensive_Hateful_Dataset_New | 2 | 42,000 | 5,252 | 5,254 |
| Offensive Language | offensive_language_dataset | 2 | 29,216 | 3,653 | 3,653 |
| Offensive/Hate-Speech | hate-offensive-speech | 3 | 48,944 | 2,799 | 2,802 |
| Propaganda | QProp | 2 | 35,986 | 10,159 | 5,125 |
| Sarcasm | News-Headlines-Dataset-For-Sarcasm-Detection | 2 | 19,965 | 5,719 | 2,858 |
| Sentiment | NewsMTSC-dataset | 3 | 7,739 | 747 | 320 |
| Subjectivity | clef2024-checkthat-lab | 2 | 825 | 484 | 219 |
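The split statistics above lend themselves to quick programmatic sanity checks. A minimal sketch, hard-coding only three rows copied from the table (not all 52 datasets), that totals the examples per dataset:

```python
# Split sizes for a few English datasets, copied from the table above:
# (train, test, dev)
splits = {
    "CT24_T1": (22_403, 1_031, 318),
    "claim-detection": (23_224, 7_267, 5_815),
    "Cyberbullying": (32_551, 9_473, 4_751),
}

def total_examples(name: str) -> int:
    """Total number of examples across the train/test/dev splits."""
    return sum(splits[name])

for name in splits:
    print(f"{name}: {total_examples(name):,} total examples")
```

The same pattern extends to the full table if you parse the markdown rows instead of hard-coding them.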

## Results

Below, we present the performance of **L-Lens (LlamaLens)**, where *"Eng"* refers to the English-instructed model and *"Native"* to the model trained with native-language instructions. Results are compared against the SOTA (where available) and against the Base: **Llama-Instruct 3.1** baseline. The **Δ** (delta) column gives the difference between LlamaLens and the SOTA performance, computed as (LlamaLens − SOTA).
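The Δ column follows directly from this definition. As a worked check against the table below, Checkworthiness Detection reports a SOTA f1_pos of 0.753 and an L-Lens-Eng score of 0.942, while Cyberbullying Detection reports 0.907 (SOTA) versus 0.836 (L-Lens-Eng):

```python
def delta(llamalens_score: float, sota_score: float) -> float:
    """Δ as defined above: LlamaLens score minus SOTA score, rounded to 3 decimals."""
    return round(llamalens_score - sota_score, 3)

# Checkworthiness Detection (CT24_checkworthy, f1_pos): 0.942 vs. SOTA 0.753.
print(delta(0.942, 0.753))  # → 0.189
# Cyberbullying Detection (Acc): 0.836 vs. SOTA 0.907 gives a negative delta.
print(delta(0.836, 0.907))  # → -0.071
```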

| **Task** | **Dataset** | **Metric** | **SOTA** | **Base** | **L-Lens-Eng** | **L-Lens-Native** | **Δ (L-Lens (Eng) - SOTA)** |
|:----------------------------------:|:--------------------------------------------:|:----------:|:--------:|:--------:|:--------------:|:-----------------:|:---------------------------:|
| Checkworthiness Detection | CT24_checkworthy | f1_pos | 0.753 | 0.404 | 0.942 | 0.942 | 0.189 |
| Claim Detection | claim-detection | Mi-F1 | -- | 0.545 | 0.864 | 0.889 | -- |
| Cyberbullying Detection | Cyberbullying | Acc | 0.907 | 0.175 | 0.836 | 0.855 | -0.071 |
| Emotion Detection | emotion | Ma-F1 | 0.790 | 0.353 | 0.803 | 0.808 | 0.013 |
| Factuality | News_dataset | Acc | 0.920 | 0.654 | 1.000 | 1.000 | 0.080 |
| Factuality | Politifact | W-F1 | 0.490 | 0.121 | 0.287 | 0.311 | -0.203 |
| News Categorization | CNN_News_Articles_2011-2022 | Acc | 0.940 | 0.644 | 0.970 | 0.970 | 0.030 |
| News Categorization | News_Category_Dataset | Ma-F1 | 0.769 | 0.970 | 0.824 | 0.520 | 0.055 |
| News Genre Categorization | SemEval23T3-subtask1 | Mi-F1 | 0.815 | 0.687 | 0.241 | 0.253 | -0.574 |
| News Summarization | xlsum | R-2 | 0.152 | 0.074 | 0.182 | 0.181 | 0.030 |
| Offensive Language Detection | Offensive_Hateful_Dataset_New | Mi-F1 | -- | 0.692 | 0.814 | 0.813 | -- |
| Offensive Language Detection | offensive_language_dataset | Mi-F1 | 0.994 | 0.646 | 0.899 | 0.893 | -0.095 |
| Offensive Language and Hate Speech | hate-offensive-speech | Acc | 0.945 | 0.602 | 0.931 | 0.935 | -0.014 |
| Propaganda Detection | QProp | Ma-F1 | 0.667 | 0.759 | 0.963 | 0.973 | 0.296 |
| Sarcasm Detection | News-Headlines-Dataset-For-Sarcasm-Detection | Acc | 0.897 | 0.668 | 0.936 | 0.947 | 0.039 |
| Sentiment Classification | NewsMTSC-dataset | Ma-F1 | 0.817 | 0.628 | 0.751 | 0.748 | -0.066 |
| Subjectivity Detection | clef2024-checkthat-lab | Ma-F1 | 0.744 | 0.535 | 0.642 | 0.628 | -0.102 |

## File Format

Each JSONL file in the dataset follows a structured format with the following fields: