Manipulating spacing between words by adding extra spaces, removing spaces, or inserting invisible zero-width characters that affect how text is segmented.
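As a minimal sketch of how such spacing perturbations could be generated (a hypothetical `perturb_spacing` helper for illustration, not the dataset's actual generation code), each inter-word space can be replaced with a doubled space, no space at all, or a space followed by a zero-width character:

```python
import random

ZWSP = "\u200b"  # zero-width space: invisible, but changes segmentation

def perturb_spacing(text: str, seed: int = 0) -> str:
    """Replace each inter-word space with one random spacing perturbation:
    an extra space, no space at all, or a space plus a zero-width character.
    Illustrative helper only, not the benchmark's generator."""
    rng = random.Random(seed)
    words = text.split(" ")
    pieces = [words[0]]
    for word in words[1:]:
        sep = rng.choice(["  ", "", " " + ZWSP])
        pieces.append(sep + word)
    return "".join(pieces)

perturbed = perturb_spacing("in yek jomle ast")
```

The visible characters are untouched; only the spacing between words changes, so a tokenizer-robust model should ideally answer the perturbed question the same way as the canonical one.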

#### Model Performance Comparison

| model_name | arabic_keyboard_for_farsi | canonical | code_language_script_switching | colloquial | dialects | equivalent_expressions | keyboard_proximity_errors | number_romanization | optional_diacritics | romanization | spelled_out | word_spacing_zero-width_characters_extra_space |
|:-------------|----------------------------:|------------:|---------------------------------:|-------------:|-----------:|-------------------------:|----------------------------:|----------------------:|----------------------:|---------------:|--------------:|-------------------------------------------------:|
| Aya | 0.346 | 0.78 | 0.717 | 0.661 | 0.529 | 0.607 | 0.409 | 0.744 | 0.438 | 0.346 | 0.458 | 0.557 |
| BLOOM | 0.448 | 0.775 | 0.77 | 0.6 | 0.505 | 0.675 | 0.571 | 0.669 | 0.505 | 0.276 | 0.542 | 0.589 |
| ByT5 | 0.478 | 0.769 | 0.719 | 0.591 | 0.531 | 0.616 | 0.527 | 0.568 | 0.446 | 0.28 | 0.337 | 0.476 |
| Comma | 0.471 | 0.79 | 0.66 | 0.652 | 0.523 | 0.66 | 0.503 | 0.617 | 0.457 | 0.449 | 0.291 | 0.484 |
| GPT-2 | 0.569 | 0.78 | 0.672 | 0.739 | 0.545 | 0.66 | 0.616 | 0.498 | 0.436 | 0.298 | 0.449 | 0.573 |
| GPT-4o | 0.406 | 0.75 | 0.744 | 0.669 | 0.504 | 0.744 | 0.588 | 0.752 | 0.375 | 0.306 | 0.466 | 0.544 |
| Gemma-2 | 0.375 | 0.75 | 0.569 | 0.688 | 0.475 | 0.712 | 0.544 | 0.44 | 0.431 | 0.425 | 0.446 | 0.5 |
| Llama-3.2 | 0.355 | 0.743 | 0.688 | 0.587 | 0.55 | 0.675 | 0.499 | 0.907 | 0.291 | 0.304 | 0.429 | 0.46 |
| Phi-3 | 0.48 | 0.82 | 0.675 | 0.593 | 0.501 | 0.63 | 0.542 | 0.555 | 0.493 | 0.328 | 0.469 | 0.593 |
| Qwen-3 | 0.428 | 0.857 | 0.643 | 0.545 | 0.541 | 0.59 | 0.534 | 0.644 | 0.455 | 0.252 | 0.384 | 0.473 |
| Tekken | 0.481 | 0.842 | 0.743 | 0.594 | 0.51 | 0.697 | 0.561 | 0.853 | 0.449 | 0.318 | 0.522 | 0.547 |
| TokenMonster | 0.533 | 0.714 | 0.622 | 0.671 | 0.521 | 0.61 | 0.542 | 0.728 | 0.523 | 0.352 | 0.318 | 0.519 |
| XGLM | 0.499 | 0.757 | 0.669 | 0.558 | 0.522 | 0.706 | 0.539 | 0.644 | 0.462 | 0.297 | 0.415 | 0.559 |
| mBERT | 0.377 | 0.746 | 0.678 | 0.678 | 0.508 | 0.659 | 0.585 | 0.402 | 0.547 | 0.414 | 0.296 | 0.659 |
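One way to read the table is to compare each perturbation column against the `canonical` column. A minimal sketch using two rows copied from the table (the `drop` metric here is our illustration, canonical minus perturbed score, not a metric defined by the benchmark):

```python
# Scores copied from two rows of the table above.
scores = {
    "Aya":    {"canonical": 0.780, "romanization": 0.346},
    "Qwen-3": {"canonical": 0.857, "romanization": 0.252},
}

# "drop" = canonical score minus score under the perturbation;
# larger values mean less robustness to that perturbation.
for model, s in scores.items():
    drop = s["canonical"] - s["romanization"]
    print(f"{model}: drop under romanization = {drop:.3f}")
```

Romanization is the hardest column for most models in the table, with scores falling well below their canonical performance.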

#### Who are the source data producers?

Native Farsi speakers curated and validated all questions and perturbations. The TokSuite research team at R3 designed the overall benchmark framework.