Update README.md
README.md CHANGED

@@ -1,5 +1,5 @@
 ---
-license:
+license: mit
 multilinguality: multilingual
 task_categories:
 - multiple-choice
@@ -7,6 +7,7 @@ pretty_name: Tokenization Robustness
 tags:
 - multilingual
 - tokenization
+- robustness
 dataset_info:
 - config_name: tokenizer_robustness_completion_general_abbreviations
   features:
@@ -777,7 +778,11 @@ configs:
   data_files:
   - split: test
     path: tokenizer_robustness_completion_general_unusual_formatting/test-*
+language:
+- en
+size_categories:
+- n<1K
 ---
 ## TokSuite Bonus Benchmarks (General Collection)
 
 This dataset provides a **bonus set of TokSuite benchmarks** designed to probe tokenizer robustness under **language-agnostic, cross-domain surface-form perturbations** that commonly occur in real-world text. The General collection includes canonical questions alongside targeted perturbations such as abbreviations, character deletion, currency symbol usage, diverse date formats, and unusual or non-standard formatting. Unlike language-specific TokSuite subsets, these benchmarks focus on **universal tokenization stressors** that arise across languages, domains, and writing contexts, offering a compact but high-signal evaluation suite for analyzing how tokenizers handle formatting irregularities, symbol-heavy text, and noisy inputs independent of linguistic morphology.
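The two config names visible in the diff (`tokenizer_robustness_completion_general_abbreviations` and `tokenizer_robustness_completion_general_unusual_formatting`) share a common prefix, suggesting one config per perturbation category. A minimal sketch of that naming pattern and of loading one split with the `datasets` library — the repository id `org/toksuite-general` is a placeholder, and only the two config names shown in the diff are confirmed:

```python
# Sketch of the config-name pattern implied by this dataset card.
# Only "abbreviations" and "unusual_formatting" are confirmed by the diff;
# any other category slug derived this way is an assumption.
PREFIX = "tokenizer_robustness_completion_general_"

def config_name(category: str) -> str:
    """Turn a human-readable perturbation category into a config name."""
    return PREFIX + category.strip().lower().replace(" ", "_")

print(config_name("abbreviations"))
# tokenizer_robustness_completion_general_abbreviations

# Loading one confirmed config (requires network; the repo id below is
# hypothetical, not taken from the card):
# from datasets import load_dataset
# ds = load_dataset("org/toksuite-general",
#                   config_name("unusual formatting"), split="test")
```

Each config exposes a single `test` split, per the `data_files` entries in the YAML front matter.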