---
license: mit
multilinguality: multilingual
task_categories:
- multiple-choice
pretty_name: Tokenization Robustness
tags:
- multilingual
- tokenization
- robustness
dataset_info:
- config_name: tokenizer_robustness_completion_chinese_canonical
features:
- name: question
dtype: string
- name: choices
list: string
- name: answer
dtype: int64
- name: answer_label
dtype: string
- name: split
dtype: string
- name: subcategories
dtype: string
- name: category
dtype: string
- name: lang
dtype: string
- name: second_lang
dtype: string
- name: notes
dtype: string
- name: id
dtype: string
- name: set_id
dtype: string
- name: variation_id
dtype: string
splits:
- name: test
num_bytes: 8225
num_examples: 40
download_size: 9396
dataset_size: 8225
- config_name: tokenizer_robustness_completion_chinese_code_language_script_switching
features:
- name: question
dtype: string
- name: choices
list: string
- name: answer
dtype: int64
- name: answer_label
dtype: string
- name: split
dtype: string
- name: subcategories
dtype: string
- name: category
dtype: string
- name: lang
dtype: string
- name: second_lang
dtype: string
- name: notes
dtype: string
- name: id
dtype: string
- name: set_id
dtype: string
- name: variation_id
dtype: string
splits:
- name: test
num_bytes: 8136
num_examples: 40
download_size: 8261
dataset_size: 8136
- config_name: tokenizer_robustness_completion_chinese_colloquial
features:
- name: question
dtype: string
- name: choices
list: string
- name: answer
dtype: int64
- name: answer_label
dtype: string
- name: split
dtype: string
- name: subcategories
dtype: string
- name: category
dtype: string
- name: lang
dtype: string
- name: second_lang
dtype: string
- name: notes
dtype: string
- name: id
dtype: string
- name: set_id
dtype: string
- name: variation_id
dtype: string
splits:
- name: test
num_bytes: 7442
num_examples: 39
download_size: 8111
dataset_size: 7442
- config_name: tokenizer_robustness_completion_chinese_equivalent_expressions
features:
- name: question
dtype: string
- name: choices
list: string
- name: answer
dtype: int64
- name: answer_label
dtype: string
- name: split
dtype: string
- name: subcategories
dtype: string
- name: category
dtype: string
- name: lang
dtype: string
- name: second_lang
dtype: string
- name: notes
dtype: string
- name: id
dtype: string
- name: set_id
dtype: string
- name: variation_id
dtype: string
splits:
- name: test
num_bytes: 7907
num_examples: 40
download_size: 8383
dataset_size: 7907
- config_name: tokenizer_robustness_completion_chinese_keyboard_proximity_errors
features:
- name: question
dtype: string
- name: choices
list: string
- name: answer
dtype: int64
- name: answer_label
dtype: string
- name: split
dtype: string
- name: subcategories
dtype: string
- name: category
dtype: string
- name: lang
dtype: string
- name: second_lang
dtype: string
- name: notes
dtype: string
- name: id
dtype: string
- name: set_id
dtype: string
- name: variation_id
dtype: string
splits:
- name: test
num_bytes: 7340
num_examples: 40
download_size: 8251
dataset_size: 7340
- config_name: tokenizer_robustness_completion_chinese_ocr_errors
features:
- name: question
dtype: string
- name: choices
list: string
- name: answer
dtype: int64
- name: answer_label
dtype: string
- name: split
dtype: string
- name: subcategories
dtype: string
- name: category
dtype: string
- name: lang
dtype: string
- name: second_lang
dtype: string
- name: notes
dtype: string
- name: id
dtype: string
- name: set_id
dtype: string
- name: variation_id
dtype: string
splits:
- name: test
num_bytes: 8441
num_examples: 40
download_size: 8307
dataset_size: 8441
- config_name: tokenizer_robustness_completion_chinese_optional_diacritics
features:
- name: question
dtype: string
- name: choices
list: string
- name: answer
dtype: int64
- name: answer_label
dtype: string
- name: split
dtype: string
- name: subcategories
dtype: string
- name: category
dtype: string
- name: lang
dtype: string
- name: second_lang
dtype: string
- name: notes
dtype: string
- name: id
dtype: string
- name: set_id
dtype: string
- name: variation_id
dtype: string
splits:
- name: test
num_bytes: 10200
num_examples: 40
download_size: 8835
dataset_size: 10200
- config_name: tokenizer_robustness_completion_chinese_partially_romanized
features:
- name: question
dtype: string
- name: choices
list: string
- name: answer
dtype: int64
- name: answer_label
dtype: string
- name: split
dtype: string
- name: subcategories
dtype: string
- name: category
dtype: string
- name: lang
dtype: string
- name: second_lang
dtype: string
- name: notes
dtype: string
- name: id
dtype: string
- name: set_id
dtype: string
- name: variation_id
dtype: string
splits:
- name: test
num_bytes: 7680
num_examples: 40
download_size: 8217
dataset_size: 7680
- config_name: tokenizer_robustness_completion_chinese_romanization
features:
- name: question
dtype: string
- name: choices
list: string
- name: answer
dtype: int64
- name: answer_label
dtype: string
- name: split
dtype: string
- name: subcategories
dtype: string
- name: category
dtype: string
- name: lang
dtype: string
- name: second_lang
dtype: string
- name: notes
dtype: string
- name: id
dtype: string
- name: set_id
dtype: string
- name: variation_id
dtype: string
splits:
- name: test
num_bytes: 7859
num_examples: 40
download_size: 8285
dataset_size: 7859
- config_name: tokenizer_robustness_completion_chinese_space_removal
features:
- name: question
dtype: string
- name: choices
list: string
- name: answer
dtype: int64
- name: answer_label
dtype: string
- name: split
dtype: string
- name: subcategories
dtype: string
- name: category
dtype: string
- name: lang
dtype: string
- name: second_lang
dtype: string
- name: notes
dtype: string
- name: id
dtype: string
- name: set_id
dtype: string
- name: variation_id
dtype: string
splits:
- name: test
num_bytes: 10554
num_examples: 40
download_size: 8618
dataset_size: 10554
- config_name: tokenizer_robustness_completion_chinese_spelled_out
features:
- name: question
dtype: string
- name: choices
list: string
- name: answer
dtype: int64
- name: answer_label
dtype: string
- name: split
dtype: string
- name: subcategories
dtype: string
- name: category
dtype: string
- name: lang
dtype: string
- name: second_lang
dtype: string
- name: notes
dtype: string
- name: id
dtype: string
- name: set_id
dtype: string
- name: variation_id
dtype: string
splits:
- name: test
num_bytes: 2583
num_examples: 13
download_size: 6308
dataset_size: 2583
- config_name: tokenizer_robustness_completion_chinese_traditional
features:
- name: question
dtype: string
- name: choices
list: string
- name: answer
dtype: int64
- name: answer_label
dtype: string
- name: split
dtype: string
- name: subcategories
dtype: string
- name: category
dtype: string
- name: lang
dtype: string
- name: second_lang
dtype: string
- name: notes
dtype: string
- name: id
dtype: string
- name: set_id
dtype: string
- name: variation_id
dtype: string
splits:
- name: test
num_bytes: 6125
num_examples: 33
download_size: 7768
dataset_size: 6125
- config_name: >-
tokenizer_robustness_completion_chinese_word_spacing_zero-width_characters_extra_space
features:
- name: question
dtype: string
- name: choices
list: string
- name: answer
dtype: int64
- name: answer_label
dtype: string
- name: split
dtype: string
- name: subcategories
dtype: string
- name: category
dtype: string
- name: lang
dtype: string
- name: second_lang
dtype: string
- name: notes
dtype: string
- name: id
dtype: string
- name: set_id
dtype: string
- name: variation_id
dtype: string
splits:
- name: test
num_bytes: 8831
num_examples: 40
download_size: 8368
dataset_size: 8831
configs:
- config_name: tokenizer_robustness_completion_chinese_canonical
data_files:
- split: test
path: tokenizer_robustness_completion_chinese_canonical/test-*
- config_name: tokenizer_robustness_completion_chinese_code_language_script_switching
data_files:
- split: test
path: >-
tokenizer_robustness_completion_chinese_code_language_script_switching/test-*
- config_name: tokenizer_robustness_completion_chinese_colloquial
data_files:
- split: test
path: tokenizer_robustness_completion_chinese_colloquial/test-*
- config_name: tokenizer_robustness_completion_chinese_equivalent_expressions
data_files:
- split: test
path: tokenizer_robustness_completion_chinese_equivalent_expressions/test-*
- config_name: tokenizer_robustness_completion_chinese_keyboard_proximity_errors
data_files:
- split: test
path: tokenizer_robustness_completion_chinese_keyboard_proximity_errors/test-*
- config_name: tokenizer_robustness_completion_chinese_ocr_errors
data_files:
- split: test
path: tokenizer_robustness_completion_chinese_ocr_errors/test-*
- config_name: tokenizer_robustness_completion_chinese_optional_diacritics
data_files:
- split: test
path: tokenizer_robustness_completion_chinese_optional_diacritics/test-*
- config_name: tokenizer_robustness_completion_chinese_partially_romanized
data_files:
- split: test
path: tokenizer_robustness_completion_chinese_partially_romanized/test-*
- config_name: tokenizer_robustness_completion_chinese_romanization
data_files:
- split: test
path: tokenizer_robustness_completion_chinese_romanization/test-*
- config_name: tokenizer_robustness_completion_chinese_space_removal
data_files:
- split: test
path: tokenizer_robustness_completion_chinese_space_removal/test-*
- config_name: tokenizer_robustness_completion_chinese_spelled_out
data_files:
- split: test
path: tokenizer_robustness_completion_chinese_spelled_out/test-*
- config_name: tokenizer_robustness_completion_chinese_traditional
data_files:
- split: test
path: tokenizer_robustness_completion_chinese_traditional/test-*
- config_name: >-
tokenizer_robustness_completion_chinese_word_spacing_zero-width_characters_extra_space
data_files:
- split: test
path: >-
tokenizer_robustness_completion_chinese_word_spacing_zero-width_characters_extra_space/test-*
language:
- en
- zh
size_categories:
- n<1K
---
# Dataset Card for Tokenization Robustness
<img src="toksuite-logo.png" alt="TokSuite Logo" width="250" style="margin-left: auto; margin-right: auto; display: block;"/>

## TokSuite Benchmark (Chinese Collection)
## Dataset Description
This dataset is part of **TokSuite**, a comprehensive benchmark designed to measure how different tokenization strategies affect language model performance and robustness. This specific subset contains Chinese language multiple-choice text completion questions with various real-world perturbations that test tokenizer robustness.
- **Curated by:** R3 Research Team
- **Language(s):** Chinese (`zh`), with English (`en`) translations in the `second_lang` field
- **License:** MIT License
### Dataset Summary
TokSuite addresses a fundamental challenge in language model research: understanding how tokenization choices impact model behavior in isolation. The Chinese subset specifically measures model performance on canonical questions and various perturbations.
**Key Features:**
- 40 canonical questions covering general knowledge, geography, science, and language understanding
- Multiple perturbation types reflecting real-world text variations in Chinese
- Parallel structure with the other TokSuite language collections (English, Turkish, Farsi, Italian)
- Native speaker curation ensuring linguistic authenticity
### Supported Tasks
- **Multiple-Choice Question Answering**: Text completion format with 4 answer choices
- **Tokenizer Robustness Evaluation**: Measuring performance degradation under various text perturbations (a minimal scoring sketch follows this list)
- **Multilingual NLP Benchmarking**: Evaluating language models on Chinese text understanding
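The robustness task reduces to comparing accuracy on the canonical config against accuracy on a perturbed config. Below is a minimal scoring sketch, assuming the dataset is hosted on the Hugging Face Hub (the repo id is a placeholder) and that `predict(question, choices)` is your model's own answer-selection function:

```python
from datasets import load_dataset

REPO_ID = "your-org/tokenization-robustness"  # placeholder; substitute the actual Hub path

def accuracy(config_name: str, predict) -> float:
    """Fraction of questions in a config's test split answered correctly."""
    ds = load_dataset(REPO_ID, config_name, split="test")
    correct = sum(predict(ex["question"], ex["choices"]) == ex["answer"] for ex in ds)
    return correct / len(ds)

# Degradation is the accuracy drop from the canonical condition to a perturbation:
# base = accuracy("tokenizer_robustness_completion_chinese_canonical", my_predict)
# pert = accuracy("tokenizer_robustness_completion_chinese_ocr_errors", my_predict)
# print(f"degradation: {base - pert:.3f}")
```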
### Languages
The dataset contains text in Chinese (language code: `zho_Hans` / `zh`); the `second_lang` field provides English translations.
## Dataset Structure
### Data Fields
| Field | Type | Description |
|-------|------|-------------|
| `question` | `string` | The question text in Chinese |
| `choices` | `list[string]` | 4 multiple-choice answer options |
| `answer` | `int64` | Index of the correct answer |
| `answer_label` | `string` | Letter label of the correct answer |
| `split` | `string` | Dataset split identifier |
| `subcategories` | `string` | Perturbation category |
| `category` | `string` | Question category |
| `lang` | `string` | Language code |
| `second_lang` | `string` | English translation or description of the question |
| `notes` | `string` | Additional context about the question or perturbation |
| `id` | `string` | Unique question identifier |
| `set_id` | `string` | Question set grouping identifier |
| `variation_id` | `string` | Variation number within a question set |
| `vanilla_cos_sim_to_canonical` | `dict[string, float]` | Cosine similarity scores to canonical form (raw tokens) |
| `trimmed_cos_sim_to_canonical` | `dict[string, float]` | Cosine similarity scores after token normalization |
| `token_counts` | `dict[string, integer]` | Number of tokens produced per tokenizer |
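A minimal loading sketch using the `datasets` library; the repo id below is a placeholder for the dataset's actual Hub path:

```python
from datasets import load_dataset

# Placeholder repo id; substitute the dataset's actual Hub path.
ds = load_dataset(
    "your-org/tokenization-robustness",
    "tokenizer_robustness_completion_chinese_canonical",
    split="test",
)

example = ds[0]
print(example["question"])       # question text in Chinese
print(example["choices"])        # four answer options
print(example["answer"], example["answer_label"])  # correct index and its letter label
```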
## Dataset Creation
### Curation Rationale
This dataset was created to:
1. Systematically evaluate how different tokenization strategies handle Chinese
2. Measure robustness against real-world text perturbations specific to Chinese
3. Support research into the impact of tokenization on language model behavior
4. Provide standardized benchmarks for Chinese language models
The questions were designed to be straightforward with high baseline accuracy, allowing researchers to cleanly measure performance degradation when perturbations are applied.
### Source Data
#### Data Collection and Processing
- **Canonical Questions**: 40 baseline questions created in English
- **Translation**: Native Chinese speakers translated questions
- **Perturbations**: Each question underwent targeted perturbations designed to reflect Chinese characteristics
- **Validation**: Model-in-the-loop process ensured high baseline accuracy
#### Perturbation Categories
1. **Canonical**
The baseline Chinese text written in standard, well-formed Simplified Chinese with no perturbations. This serves as the reference condition for evaluating the impact of all other perturbations.
2. **Code / Language / Script Switching**
Mixes Chinese with English words, phrases, or symbols within the same sentence, reflecting real-world bilingual usage and code-switching commonly seen in technical or online contexts.
3. **Colloquial**
Rewrites sentences using informal or conversational Chinese expressions, including spoken-style phrasing that differs from standard written Chinese while preserving meaning.
4. **Equivalent Expressions**
Replaces canonical phrases with alternative Chinese expressions that convey the same meaning using different words or constructions, isolating tokenizer sensitivity to paraphrasing.
5. **Keyboard Proximity Errors**
Introduces character-level errors caused by adjacent key presses in pinyin-based input methods, simulating realistic typing mistakes during Chinese text entry.
6. **OCR Errors**
Introduces character substitutions, deletions, or confusions commonly produced by optical character recognition systems, especially for visually similar Chinese characters.
7. **Optional Diacritics**
Adds or removes optional diacritic markers (e.g., tone marks in pinyin annotations when present), testing tokenizer robustness to auxiliary pronunciation cues.
8. **Partially Romanized**
Mixes Chinese characters with romanized (pinyin or Latin-script) representations for some words or phrases, reflecting hybrid writing styles used in informal digital text.
9. **Romanization**
Fully converts Chinese text into romanized form (e.g., pinyin), replacing characters with Latin-script equivalents while preserving pronunciation and meaning.
10. **Space Removal**
Removes spaces that may appear between Chinese characters or between Chinese and Latin text, stressing tokenizer assumptions about whitespace usage.
11. **Spelled-Out Forms**
Replaces numerals, symbols, or compact expressions with fully spelled-out Chinese equivalents, increasing sequence length and altering token boundaries.
12. **Traditional**
Converts Simplified Chinese characters into their Traditional Chinese counterparts, preserving semantics while changing Unicode character forms.
13. **Word Spacing, Zero-Width Characters, Extra Space**
Manipulates spacing by inserting extra spaces, removing expected spaces, or adding invisible zero-width characters, stressing tokenizer handling of segmentation and Unicode normalization. A brief demonstration of this perturbation (together with romanization) appears in the sketch after this list.
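To make categories 9 and 13 concrete, here is a hedged sketch that builds a romanized and a zero-width-perturbed variant of a short sentence and compares the token counts each produces. It assumes the third-party `pypinyin` package and the GPT-2 tokenizer from `transformers`; neither is prescribed by the benchmark, and any tokenizer can be substituted:

```python
from pypinyin import lazy_pinyin
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # any tokenizer works here

canonical = "今天天气很好"  # "The weather is nice today."

# 9. Romanization: replace characters with space-separated pinyin syllables.
romanized = " ".join(lazy_pinyin(canonical))

# 13. Zero-width characters / extra spaces: insert U+200B and stray spaces.
zero_width = "今天\u200b天气 很 好"

for label, text in [("canonical", canonical),
                    ("romanized", romanized),
                    ("zero-width", zero_width)]:
    print(f"{label:10s} {len(tok(text)['input_ids'])} tokens")
```

Semantically identical inputs can thus produce very different token sequences, which is exactly the sensitivity these configs are designed to measure.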
#### Who are the source data producers?
Native Chinese speakers curated and validated all questions and perturbations. The TokSuite research team at R3 designed the overall benchmark framework.
### Annotations
#### Annotation process
Questions were manually created and translated by native speakers. Each perturbation was carefully designed to reflect authentic variations encountered in real-world Chinese text processing.
#### Who are the annotators?
Native Chinese speakers with expertise in linguistics and NLP, working as part of the TokSuite project.
### Personal and Sensitive Information
The dataset contains only general knowledge questions and does not include any personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contributes to improving language technology for Chinese speakers by enabling better understanding of tokenization challenges and supporting more robust multilingual models.
### Discussion of Biases
- **Language variety:** The dataset uses Standard Chinese (Mandarin) and may not fully represent regional or dialectal variations.
- **Script focus:** Simplified Chinese is used as the primary script; Traditional Chinese and romanized forms (pinyin) are included as perturbations.
- **Domain coverage:** Questions focus on general knowledge and may not represent domain-specific Chinese language use.
- **Question simplicity:** Designed for high baseline accuracy, which may not reflect real-world task complexity.
### Other Known Limitations
- Relatively small dataset size (evaluation-only)
- Multiple-choice format, which does not measure free-form generation quality
- Perturbations are specific to Chinese and may not transfer to other languages
- Results may differ at larger model scales
## Additional Information
### Dataset Curators
The dataset was curated by the TokSuite research team at R3.
### Licensing Information
MIT license
### Citation Information
If you use this dataset in your research, please cite the TokSuite paper:
```bibtex
@misc{toksuite2026,
  title={TokSuite: Measuring the Impact of Tokenizer Choice on Language Model Behavior},
  author={Altıntaş, Gül Sena and Ehghaghi, Malikeh and Lester, Brian and Liu, Fengyuan and Zhao, Wanru and Ciccone, Marco and Raffel, Colin},
  year={2026},
  eprint={2512.20757},
  archivePrefix={arXiv},
  url={https://arxiv.org/abs/2512.20757}
}
```
**Paper**: [TokSuite: Measuring the Impact of Tokenizer Choice on Language Model Behavior](https://arxiv.org/abs/2512.20757)
### Contributions
This dataset is part of TokSuite, which includes:
- 14 language models with identical architectures but different tokenizers
- Multilingual benchmark datasets (English, Turkish, Italian, Farsi, Chinese)
- Comprehensive analysis of tokenization's impact on model behavior
### Contact
For questions or issues related to this dataset, please refer to the TokSuite project or contact the authors of the paper.
---
<div align="center">
**Part of the [TokSuite Project](TBD)**
*Understanding Tokenization's Role in Language Model Behavior*
</div> |