Add paper link and Github link #2
opened by nielsr (HF Staff)

README.md CHANGED
@@ -1,601 +1,503 @@
 ---
 license: cc-by-nc-sa-4.0
 task_categories:
-
-language:
-- ar
-tags:
-- Social Media
-- News Media
-- Sentiment
-- Stance
-- Emotion
 pretty_name: 'LlamaLens: Specialized Multilingual LLM for Analyzing News and Social Media Content -- Arabic'
-
 dataset_info:
 - config_name: SANADAkhbarona-news-categorization
   splits:
-
 - config_name: CT22Harmful
   splits:
-
 - config_name: Mawqif-Arabic-Stance-main
   splits:
-
 - config_name: CT22Claim
   splits:
-
 - config_name: annotated-hatetweets-4-classes
   splits:
-
 - config_name: ar_reviews_100k
   splits:
-
 - config_name: Arafacts
   splits:
-
 - config_name: OSACT4SubtaskA
   splits:
-
 - config_name: SANADAlArabiya-news-categorization
   splits:
-
 - config_name: ArPro
   splits:
-
 - config_name: xlsum
   splits:
-
 - config_name: ArSarcasm-v2
   splits:
-
 - config_name: COVID19Factuality
   splits:
-
 - config_name: Emotional-Tone
   splits:
-
 - config_name: ans-claim
   splits:
-
 - config_name: ArCyc_OFF
   splits:
-
 - config_name: CT24_checkworthy
   splits:
-
 - config_name: stance
   splits:
-
 - config_name: NewsHeadline
   splits:
-
 - config_name: NewsCredibilityDataset
   splits:
-
 - config_name: UltimateDataset
   splits:
-
 - config_name: ThatiAR
   splits:
-
 - config_name: ArSAS
   splits:
-
 - config_name: CT22Attentionworthy
   splits:
-
 - config_name: ASND
   splits:
-
 - config_name: OSACT4SubtaskB
   splits:
-
 - config_name: ArCyc_CB
   splits:
-
 - config_name: SANADAlkhaleej-news-categorization
   splits:
-
 configs:
 - config_name: SANADAkhbarona-news-categorization
   data_files:
-
 - config_name: CT22Harmful
   data_files:
-
 - config_name: Mawqif-Arabic-Stance-main
   data_files:
-
 - config_name: CT22Claim
   data_files:
-
 - config_name: annotated-hatetweets-4-classes
   data_files:
-
 - config_name: ar_reviews_100k
   data_files:
-
 - config_name: Arafacts
   data_files:
-
 - config_name: OSACT4SubtaskA
   data_files:
-
 - config_name: SANADAlArabiya-news-categorization
   data_files:
-
 - config_name: ArPro
   data_files:
-
 - config_name: xlsum
   data_files:
-
 - config_name: ArSarcasm-v2
   data_files:
-
 - config_name: COVID19Factuality
   data_files:
-
 - config_name: Emotional-Tone
   data_files:
-
 - config_name: ans-claim
   data_files:
-
 - config_name: ArCyc_OFF
   data_files:
-
 - config_name: CT24_checkworthy
   data_files:
-
 - config_name: stance
   data_files:
-
 - config_name: NewsHeadline
   data_files:
-
 - config_name: NewsCredibilityDataset
   data_files:
-
 - config_name: UltimateDataset
   data_files:
-
 - config_name: ThatiAR
   data_files:
-
 - config_name: ArSAS
   data_files:
-
 - config_name: CT22Attentionworthy
   data_files:
-
 - config_name: ASND
   data_files:
-
 - config_name: OSACT4SubtaskB
   data_files:
-
 - config_name: ArCyc_CB
   data_files:
-
 - config_name: SANADAlkhaleej-news-categorization
   data_files:
-
 ---
 
-# LlamaLens: Specialized Multilingual LLM Dataset
-
-## Overview
-LlamaLens is a specialized multilingual LLM designed for analyzing news and social media content. It focuses on 18 NLP tasks, leveraging 52 datasets across Arabic, English, and Hindi.
-
-<p align="center"> <img src="./capablities_tasks_datasets.png" style="width: 40%;" id="title-icon"> </p>
-
-## LlamaLens
-This repo includes the scripts needed to run our full pipeline: data preprocessing and sampling, instruction-dataset creation, model fine-tuning, inference, and evaluation.
-
-### Features
-- Multilingual support (Arabic, English, Hindi)
-- 18 NLP tasks with 52 datasets
-- Optimized for news and social media content analysis
-
-## 📂 Dataset Overview
-
-| Task | Dataset | # Labels | # Train | # Test | # Dev |
-|---------------------------|------------------------------|--------------|-------------|------------|-----------|
-| Attentionworthiness | CT22Attentionworthy | 9 | 2,470 | 1,186 | 1,071 |
-| Checkworthiness | CT24_T1 | 2 | 22,403 | 500 | 1,093 |
-| Claim | CT22Claim | 2 | 3,513 | 1,248 | 339 |
-| Cyberbullying | ArCyc_CB | 2 | 3,145 | 900 | 451 |
-| Emotion | Emotional-Tone | 8 | 7,024 | 2,009 | 1,005 |
-| Emotion | NewsHeadline | 7 | 939 | 323 | 160 |
-| Factuality | Arafacts | 5 | 4,354 | 1,245 | 623 |
-| Factuality | COVID19Factuality | 2 | 3,513 | 988 | 339 |
-| Harmful | CT22Harmful | 2 | 2,484 | 1,201 | 1,076 |
-| Hate Speech | annotated-hatetweets-4-classes | 4 | 210,526 | 100,565 | 90,544 |
-| Hate Speech | OSACT4SubtaskB | 2 | 4,778 | 1,827 | 2,048 |
-| News Genre Categorization | ASND | 10 | 74,496 | 21,942 | 11,136 |
-| News Genre Categorization | SANADAkhbarona | 7 | 62,210 | 7,824 | 7,824 |
-| News Genre Categorization | SANADAlArabiya | 6 | 56,967 | 7,123 | 7,120 |
-| News Genre Categorization | SANADAlkhaleej | 7 | 36,391 | 4,550 | 4,550 |
-| News Genre Categorization | UltimateDataset | 10 | 133,036 | 38,456 | 19,269 |
-| News Credibility | NewsCredibilityDataset | 2 | 8,671 | 2,730 | 1,426 |
-| Summarization | xlsum | -- | 37,425 | 4,689 | 4,689 |
-| Offensive Language | ArCyc_OFF | 2 | 3,138 | 900 | 450 |
-| Offensive Language | OSACT4SubtaskA | 2 | 4,780 | 1,827 | 2,047 |
-| Propaganda | ArPro | 2 | 6,002 | 1,326 | 672 |
-| Sarcasm | ArSarcasm-v2 | 2 | 8,749 | 2,996 | 3,761 |
-| Sentiment | ar_reviews_100k | 3 | 69,998 | 20,000 | 10,000 |
-| Sentiment | ArSAS | 4 | 13,883 | 3,976 | 1,987 |
-| Stance | Mawqif-Arabic-Stance-main | 2 | 3,162 | 560 | 950 |
-| Stance | stance | 3 | 2,652 | 379 | 755 |
-| Subjectivity | ThatiAR | 2 | 2,446 | 748 | 467 |
 
 ---
 
-
-|:----------------------------------:|:--------------------------------------------:|:----------:|:--------:|:---------------------:|:---------------------:|:--------------------:|:------------------------:|
-| Attentionworthiness Detection | CT22Attentionworthy | W-F1 | 0.412 | 0.158 | 0.425 | 0.454 | 0.013 |
-| Checkworthiness Detection | CT24_checkworthy | F1_Pos | 0.569 | 0.610 | 0.502 | 0.509 | -0.067 |
-| Claim Detection | CT22Claim | Acc | 0.703 | 0.581 | 0.734 | 0.756 | 0.031 |
-| Cyberbullying Detection | ArCyc_CB | Acc | 0.863 | 0.766 | 0.870 | 0.833 | 0.007 |
-| Emotion Detection | Emotional-Tone | W-F1 | 0.658 | 0.358 | 0.705 | 0.736 | 0.047 |
-| Emotion Detection | NewsHeadline | Acc | 1.000 | 0.406 | 0.480 | 0.458 | -0.520 |
-| Factuality | Arafacts | Mi-F1 | 0.850 | 0.210 | 0.771 | 0.738 | -0.079 |
-| Factuality | COVID19Factuality | W-F1 | 0.831 | 0.492 | 0.800 | 0.840 | -0.031 |
-| Harmfulness Detection | CT22Harmful | F1_Pos | 0.557 | 0.507 | 0.523 | 0.535 | -0.034 |
-| Hate Speech Detection | annotated-hatetweets-4-classes | W-F1 | 0.630 | 0.257 | 0.526 | 0.517 | -0.104 |
-| Hate Speech Detection | OSACT4SubtaskB | Mi-F1 | 0.950 | 0.819 | 0.955 | 0.955 | 0.005 |
-| News Categorization | ASND | Ma-F1 | 0.770 | 0.587 | 0.919 | 0.929 | 0.149 |
-| News Categorization | SANADAkhbarona-news-categorization | Acc | 0.940 | 0.784 | 0.954 | 0.953 | 0.014 |
-| News Categorization | SANADAlArabiya-news-categorization | Acc | 0.974 | 0.893 | 0.987 | 0.985 | 0.013 |
-| News Categorization | SANADAlkhaleej-news-categorization | Acc | 0.986 | 0.865 | 0.984 | 0.982 | -0.002 |
-| News Categorization | UltimateDataset | Ma-F1 | 0.970 | 0.376 | 0.865 | 0.880 | -0.105 |
-| News Credibility | NewsCredibilityDataset | Acc | 0.899 | 0.455 | 0.935 | 0.933 | 0.036 |
-| News Summarization | xlsum | R-2 | 0.137 | 0.034 | 0.129 | 0.130 | -0.009 |
-| Offensive Language Detection | ArCyc_OFF | Ma-F1 | 0.878 | 0.489 | 0.877 | 0.879 | -0.001 |
-| Offensive Language Detection | OSACT4SubtaskA | Ma-F1 | 0.905 | 0.782 | 0.896 | 0.882 | -0.009 |
-| Propaganda Detection | ArPro | Mi-F1 | 0.767 | 0.597 | 0.747 | 0.731 | -0.020 |
-| Sarcasm Detection | ArSarcasm-v2 | F1_Pos | 0.584 | 0.477 | 0.520 | 0.542 | -0.064 |
-| Sentiment Classification | ar_reviews_100k | F1_Pos | -- | 0.681 | 0.785 | 0.779 | -- |
-| Sentiment Classification | ArSAS | Acc | 0.920 | 0.603 | 0.800 | 0.804 | -0.120 |
-| Stance Detection | stance | Ma-F1 | 0.767 | 0.608 | 0.926 | 0.881 | 0.159 |
-| Stance Detection | Mawqif-Arabic-Stance-main | Ma-F1 | 0.789 | 0.764 | 0.853 | 0.826 | 0.065 |
-| Subjectivity Detection | ThatiAR | f1_pos | 0.800 | 0.562 | 0.441 | 0.383 | -0.359 |
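Several rows in the results table above report W-F1 (weighted F1). As a reference for readers unfamiliar with the metric, here is a minimal, dependency-free sketch of how a support-weighted F1 score is computed; the label lists are made-up illustrations, not values from these datasets:

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Per-class F1, averaged with each class weighted by its support
    in y_true (the W-F1 convention)."""
    support = Counter(y_true)
    score = 0.0
    for label in support:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        score += f1 * support[label] / len(y_true)
    return score

# Toy labels, illustrative only:
example = weighted_f1(["pos", "pos", "neg", "neg"], ["pos", "neg", "neg", "neg"])
```

This matches scikit-learn's `f1_score(..., average="weighted")` on the same inputs.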
 
-
-- `id`: Unique identifier for each data entry.
-- `original_id`: Identifier from the original dataset, if available.
-- `input`: The original text that needs to be analyzed.
-- `output`: The label assigned to the text after analysis.
-- `dataset`: Name of the dataset the entry belongs to.
-- `task`: The specific task type.
-- `lang`: The language of the input text.
-- `instructions`: A brief set of instructions describing how the text should be labeled.
 
-**
-
-```
-{
-  "id": "c64503bb-9253-4f58-aef8-9b244c088b15",
-  "original_id": "1,722,643,241,323,950,300",
-  "input": "يريدون توريط السلطة الفلسطينية في الضفة ودق آخر مسمار في نعش ما تبقى من هويتنا الفلسطينية، كما تم توريط غزة. يريدون إعلان كفاح مسلح من طرف الأجهزة الأمنية الفلسطينية علناً! لكن ما يعلمونه وما يرونه ولا يريدون التحدث به، أن أبناء الأجهزة الأمنية في النهار يكونون عسكريين... https://t.co/qF2Fjh24hV https://t.co/1UicLkDd52",
-  "output": "checkworthy",
-  "dataset": "Checkworthiness",
-  "task": "Checkworthiness",
-  "lang": "ar",
-  "instructions": "Identify if the given factual claim is 'checkworthy' or 'not-checkworthy'. Return only the label without any explanation, justification, or additional text."
-}
-```
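The record format shown above can be checked mechanically. A small stdlib-only sketch that parses one JSONL line and verifies the fields listed in the schema are present (the record below is adapted from the example above, with the Arabic input elided):

```python
import json

# Fields described in the instruction-dataset schema.
EXPECTED_FIELDS = {"id", "original_id", "input", "output",
                   "dataset", "task", "lang", "instructions"}

def parse_record(line: str) -> dict:
    """Parse one JSONL line and verify it carries the full schema."""
    rec = json.loads(line)
    missing = EXPECTED_FIELDS - rec.keys()
    if missing:
        raise ValueError(f"record is missing fields: {sorted(missing)}")
    return rec

# Adapted from the example record above; "input" is elided here.
line = json.dumps({
    "id": "c64503bb-9253-4f58-aef8-9b244c088b15",
    "original_id": "1,722,643,241,323,950,300",
    "input": "...",
    "output": "checkworthy",
    "dataset": "Checkworthiness",
    "task": "Checkworthiness",
    "lang": "ar",
    "instructions": "Identify if the given factual claim is 'checkworthy' or 'not-checkworthy'.",
})
record = parse_record(line)
```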
 
-## Model
-[**LlamaLens on Hugging Face**](https://huggingface.co/QCRI/LlamaLens)
-
-## Replication Scripts
-[**LlamaLens GitHub Repository**](https://github.com/firojalam/LlamaLens)
-
-## 📢 Citation
-
-If you use this dataset, please cite our [paper](https://arxiv.org/pdf/2410.15308):
 
-```
 @article{kmainasi2024llamalensspecializedmultilingualllm,
   title={LlamaLens: Specialized Multilingual LLM for Analyzing News and Social Media Content},
   author={Mohamed Bayan Kmainasi and Ali Ezzat Shahroor and Maram Hasanain and Sahinur Rahman Laskar and Naeemul Hassan and Firoj Alam},

@@ -609,4 +511,4 @@ If you use this dataset, please cite our [paper](https://arxiv.org/pdf/2410.1530

   archivePrefix={arXiv},
   primaryClass={cs.CL}
 }
-```
 ---
+language:
+- ar
 license: cc-by-nc-sa-4.0
+size_categories:
+- 10K<n<100K
 task_categories:
+- text-classification
 pretty_name: 'LlamaLens: Specialized Multilingual LLM for Analyzing News and Social Media Content -- Arabic'
+tags:
+- Social Media
+- News Media
+- Sentiment
+- Stance
+- Emotion
 dataset_info:
 - config_name: SANADAkhbarona-news-categorization
   splits:
+  - name: train
+    num_examples: 62210
+  - name: dev
+    num_examples: 7824
+  - name: test
+    num_examples: 7824
 - config_name: CT22Harmful
   splits:
+  - name: train
+    num_examples: 2484
+  - name: dev
+    num_examples: 1076
+  - name: test
+    num_examples: 1201
 - config_name: Mawqif-Arabic-Stance-main
   splits:
+  - name: train
+    num_examples: 3162
+  - name: dev
+    num_examples: 950
+  - name: test
+    num_examples: 560
 - config_name: CT22Claim
   splits:
+  - name: train
+    num_examples: 3513
+  - name: dev
+    num_examples: 339
+  - name: test
+    num_examples: 1248
 - config_name: annotated-hatetweets-4-classes
   splits:
+  - name: train
+    num_examples: 210525
+  - name: dev
+    num_examples: 90543
+  - name: test
+    num_examples: 100564
 - config_name: ar_reviews_100k
   splits:
+  - name: train
+    num_examples: 69998
+  - name: dev
+    num_examples: 10000
+  - name: test
+    num_examples: 20000
 - config_name: Arafacts
   splits:
+  - name: train
+    num_examples: 4354
+  - name: dev
+    num_examples: 623
+  - name: test
+    num_examples: 1245
 - config_name: OSACT4SubtaskA
   splits:
+  - name: train
+    num_examples: 4780
+  - name: dev
+    num_examples: 2047
+  - name: test
+    num_examples: 1827
 - config_name: SANADAlArabiya-news-categorization
   splits:
+  - name: train
+    num_examples: 56967
+  - name: dev
+    num_examples: 7120
+  - name: test
+    num_examples: 7123
 - config_name: ArPro
   splits:
+  - name: train
+    num_examples: 6002
+  - name: dev
+    num_examples: 672
+  - name: test
+    num_examples: 1326
 - config_name: xlsum
   splits:
+  - name: train
+    num_examples: 37425
+  - name: dev
+    num_examples: 4689
+  - name: test
+    num_examples: 4689
 - config_name: ArSarcasm-v2
   splits:
+  - name: train
+    num_examples: 8749
+  - name: dev
+    num_examples: 3761
+  - name: test
+    num_examples: 2996
 - config_name: COVID19Factuality
   splits:
+  - name: train
+    num_examples: 3513
+  - name: dev
+    num_examples: 339
+  - name: test
+    num_examples: 988
 - config_name: Emotional-Tone
   splits:
+  - name: train
+    num_examples: 7024
+  - name: dev
+    num_examples: 1005
+  - name: test
+    num_examples: 2009
 - config_name: ans-claim
   splits:
+  - name: train
+    num_examples: 3185
+  - name: dev
+    num_examples: 906
+  - name: test
+    num_examples: 456
 - config_name: ArCyc_OFF
   splits:
+  - name: train
+    num_examples: 3138
+  - name: dev
+    num_examples: 450
+  - name: test
+    num_examples: 900
 - config_name: CT24_checkworthy
   splits:
+  - name: train
+    num_examples: 7333
+  - name: dev
+    num_examples: 1093
+  - name: test
+    num_examples: 610
 - config_name: stance
   splits:
+  - name: train
+    num_examples: 2652
+  - name: dev
+    num_examples: 755
+  - name: test
+    num_examples: 379
 - config_name: NewsHeadline
   splits:
+  - name: train
+    num_examples: 939
+  - name: dev
+    num_examples: 160
+  - name: test
+    num_examples: 323
 - config_name: NewsCredibilityDataset
   splits:
+  - name: train
+    num_examples: 8671
+  - name: dev
+    num_examples: 1426
+  - name: test
+    num_examples: 2730
 - config_name: UltimateDataset
   splits:
+  - name: train
+    num_examples: 133036
+  - name: dev
+    num_examples: 19269
+  - name: test
+    num_examples: 38456
 - config_name: ThatiAR
   splits:
+  - name: train
+    num_examples: 2446
+  - name: dev
+    num_examples: 467
+  - name: test
+    num_examples: 748
 - config_name: ArSAS
   splits:
+  - name: train
+    num_examples: 13883
+  - name: dev
+    num_examples: 1987
+  - name: test
+    num_examples: 3976
 - config_name: CT22Attentionworthy
   splits:
+  - name: train
+    num_examples: 2479
+  - name: dev
+    num_examples: 1071
+  - name: test
+    num_examples: 1186
 - config_name: ASND
   splits:
+  - name: train
+    num_examples: 74496
+  - name: dev
+    num_examples: 11136
+  - name: test
+    num_examples: 21942
 - config_name: OSACT4SubtaskB
   splits:
+  - name: train
+    num_examples: 4778
+  - name: dev
+    num_examples: 2048
+  - name: test
+    num_examples: 1827
 - config_name: ArCyc_CB
   splits:
+  - name: train
+    num_examples: 3145
+  - name: dev
+    num_examples: 451
+  - name: test
+    num_examples: 900
 - config_name: SANADAlkhaleej-news-categorization
   splits:
+  - name: train
+    num_examples: 36391
+  - name: dev
+    num_examples: 4550
+  - name: test
+    num_examples: 4550
 configs:
 - config_name: SANADAkhbarona-news-categorization
   data_files:
+  - split: test
+    path: SANADAkhbarona-news-categorization/test.json
+  - split: dev
+    path: SANADAkhbarona-news-categorization/dev.json
+  - split: train
+    path: SANADAkhbarona-news-categorization/train.json
 - config_name: CT22Harmful
   data_files:
+  - split: test
+    path: CT22Harmful/test.json
+  - split: dev
+    path: CT22Harmful/dev.json
+  - split: train
+    path: CT22Harmful/train.json
 - config_name: Mawqif-Arabic-Stance-main
   data_files:
+  - split: test
+    path: Mawqif-Arabic-Stance-main/test.json
+  - split: dev
+    path: Mawqif-Arabic-Stance-main/dev.json
+  - split: train
+    path: Mawqif-Arabic-Stance-main/train.json
 - config_name: CT22Claim
   data_files:
+  - split: test
+    path: CT22Claim/test.json
+  - split: dev
+    path: CT22Claim/dev.json
+  - split: train
+    path: CT22Claim/train.json
 - config_name: annotated-hatetweets-4-classes
   data_files:
+  - split: test
+    path: annotated-hatetweets-4-classes/test.json
+  - split: dev
+    path: annotated-hatetweets-4-classes/dev.json
+  - split: train
+    path: annotated-hatetweets-4-classes/train.json
 - config_name: ar_reviews_100k
   data_files:
+  - split: test
+    path: ar_reviews_100k/test.json
+  - split: dev
+    path: ar_reviews_100k/dev.json
+  - split: train
+    path: ar_reviews_100k/train.json
 - config_name: Arafacts
   data_files:
+  - split: test
+    path: Arafacts/test.json
+  - split: dev
+    path: Arafacts/dev.json
+  - split: train
+    path: Arafacts/train.json
 - config_name: OSACT4SubtaskA
   data_files:
+  - split: test
+    path: OSACT4SubtaskA/test.json
+  - split: dev
+    path: OSACT4SubtaskA/dev.json
+  - split: train
+    path: OSACT4SubtaskA/train.json
 - config_name: SANADAlArabiya-news-categorization
   data_files:
+  - split: test
+    path: SANADAlArabiya-news-categorization/test.json
+  - split: dev
+    path: SANADAlArabiya-news-categorization/dev.json
+  - split: train
+    path: SANADAlArabiya-news-categorization/train.json
 - config_name: ArPro
   data_files:
+  - split: test
+    path: ArPro/test.json
+  - split: dev
+    path: ArPro/dev.json
+  - split: train
+    path: ArPro/train.json
 - config_name: xlsum
   data_files:
+  - split: test
+    path: xlsum/test.json
+  - split: dev
+    path: xlsum/dev.json
+  - split: train
+    path: xlsum/train.json
 - config_name: ArSarcasm-v2
   data_files:
+  - split: test
+    path: ArSarcasm-v2/test.json
+  - split: dev
+    path: ArSarcasm-v2/dev.json
+  - split: train
+    path: ArSarcasm-v2/train.json
 - config_name: COVID19Factuality
   data_files:
+  - split: test
+    path: COVID19Factuality/test.json
+  - split: dev
+    path: COVID19Factuality/dev.json
+  - split: train
+    path: COVID19Factuality/train.json
 - config_name: Emotional-Tone
   data_files:
+  - split: test
+    path: Emotional-Tone/test.json
+  - split: dev
+    path: Emotional-Tone/dev.json
+  - split: train
+    path: Emotional-Tone/train.json
 - config_name: ans-claim
   data_files:
+  - split: test
+    path: ans-claim/test.json
+  - split: dev
+    path: ans-claim/dev.json
+  - split: train
+    path: ans-claim/train.json
 - config_name: ArCyc_OFF
   data_files:
+  - split: test
+    path: ArCyc_OFF/test.json
+  - split: dev
+    path: ArCyc_OFF/dev.json
+  - split: train
+    path: ArCyc_OFF/train.json
 - config_name: CT24_checkworthy
   data_files:
+  - split: test
+    path: CT24_checkworthy/test.json
+  - split: dev
+    path: CT24_checkworthy/dev.json
+  - split: train
+    path: CT24_checkworthy/train.json
 - config_name: stance
   data_files:
+  - split: test
+    path: stance/test.json
+  - split: dev
+    path: stance/dev.json
+  - split: train
+    path: stance/train.json
 - config_name: NewsHeadline
   data_files:
+  - split: test
+    path: NewsHeadline/test.json
+  - split: dev
+    path: NewsHeadline/dev.json
+  - split: train
+    path: NewsHeadline/train.json
 - config_name: NewsCredibilityDataset
   data_files:
+  - split: test
+    path: NewsCredibilityDataset/test.json
+  - split: dev
+    path: NewsCredibilityDataset/dev.json
+  - split: train
+    path: NewsCredibilityDataset/train.json
 - config_name: UltimateDataset
   data_files:
+  - split: test
+    path: UltimateDataset/test.json
+  - split: dev
+    path: UltimateDataset/dev.json
+  - split: train
+    path: UltimateDataset/train.json
 - config_name: ThatiAR
   data_files:
+  - split: test
+    path: ThatiAR/test.json
+  - split: dev
+    path: ThatiAR/dev.json
+  - split: train
+    path: ThatiAR/train.json
 - config_name: ArSAS
   data_files:
+  - split: test
+    path: ArSAS/test.json
+  - split: dev
+    path: ArSAS/dev.json
+  - split: train
+    path: ArSAS/train.json
 - config_name: CT22Attentionworthy
   data_files:
+  - split: test
+    path: CT22Attentionworthy/test.json
+  - split: dev
+    path: CT22Attentionworthy/dev.json
+  - split: train
+    path: CT22Attentionworthy/train.json
 - config_name: ASND
   data_files:
+  - split: test
+    path: ASND/test.json
+  - split: dev
+    path: ASND/dev.json
+  - split: train
+    path: ASND/train.json
 - config_name: OSACT4SubtaskB
   data_files:
+  - split: test
+    path: OSACT4SubtaskB/test.json
+  - split: dev
+    path: OSACT4SubtaskB/dev.json
+  - split: train
+    path: OSACT4SubtaskB/train.json
 - config_name: ArCyc_CB
   data_files:
+  - split: test
+    path: ArCyc_CB/test.json
+  - split: dev
+    path: ArCyc_CB/dev.json
+  - split: train
+    path: ArCyc_CB/train.json
 - config_name: SANADAlkhaleej-news-categorization
   data_files:
+  - split: test
+    path: SANADAlkhaleej-news-categorization/test.json
+  - split: dev
+    path: SANADAlkhaleej-news-categorization/dev.json
+  - split: train
+    path: SANADAlkhaleej-news-categorization/train.json
 ---
 
+# LlamaLens: Specialized Multilingual LLM Dataset for Analyzing News and Social Media Content
 
+This dataset was used in the paper [LlamaLens: Specialized Multilingual LLM for Analyzing News and Social Media Content](https://huggingface.co/papers/2410.15308). It contains 52 datasets across Arabic, English, and Hindi, covering 18 NLP tasks related to news and social media analysis. The tasks include sentiment analysis, stance detection, hate speech detection, and more.
 
+**Key Features:**
 
+* **Multilingual:** Arabic, English, and Hindi.
+* **Diverse Tasks:** Sentiment analysis, stance detection, hate speech detection, news categorization, and more.
+* **Large Scale:** Contains over 1 million examples across all datasets.
 
+**Data Format:** Each dataset is provided as a JSONL file. Details on the schema are available in the original paper.
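As a sketch of how the `configs:` metadata in the YAML frontmatter organizes the files: each config name maps to per-split JSON files under a folder of the same name. The helper below only mirrors those declared paths; the `load_dataset` call in the comment uses a placeholder repository id, since this card does not state it:

```python
# Mirrors the split -> path layout declared under `configs:` in the
# YAML metadata (e.g. CT22Harmful/test.json). Stdlib only.
SPLITS = ("train", "dev", "test")

def data_files(config_name: str) -> dict:
    """Relative JSON path for each split of one config."""
    return {split: f"{config_name}/{split}.json" for split in SPLITS}

files = data_files("CT22Harmful")
# With the Hugging Face `datasets` library this would be roughly:
#   from datasets import load_dataset
#   ds = load_dataset("<dataset_repo_id>", "CT22Harmful")  # repo id is a placeholder
```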
 
+**Code:** [LlamaLens GitHub Repository](https://github.com/firojalam/LlamaLens)
 
+**Model:** [LlamaLens on Hugging Face](https://huggingface.co/QCRI/LlamaLens)
 
 ---
 
+**(Dataset Statistics - A subset is shown below for brevity; see the metadata for the complete list)**
 
+| Task | Dataset | # Labels | # Train | # Test | # Dev |
+|--------------------------|-----------------------------|----------|---------|--------|-------|
+| News Genre Categorization | SANADAkhbarona-news-categorization | 7 | 62210 | 7824 | 7824 |
+| Hate Speech | annotated-hatetweets-4-classes | 4 | 210525 | 100564 | 90543 |
+| Sentiment | ar_reviews_100k | 3 | 69998 | 20000 | 10000 |
+| ... | ... | ... | ... | ... | ... |
 
+---
 
+**Citation:**
 
+```bibtex
 @article{kmainasi2024llamalensspecializedmultilingualllm,
   title={LlamaLens: Specialized Multilingual LLM for Analyzing News and Social Media Content},
   author={Mohamed Bayan Kmainasi and Ali Ezzat Shahroor and Maram Hasanain and Sahinur Rahman Laskar and Naeemul Hassan and Firoj Alam},

@@ -609,4 +511,4 @@

   archivePrefix={arXiv},
   primaryClass={cs.CL}
 }
+```