Commit de44841 (verified) by Malikeh1375 · Parent: f7f531a

Update README.md

Files changed (1): README.md (+177 −73)
  download_size: 10010
  dataset_size: 12666
---

<!-- Provide a quick summary of the dataset. -->
 
<div align="center">
<img src="https://via.placeholder.com/800x200/4A90E2/FFFFFF?text=TokSuite" alt="TokSuite Logo" width="600"/>

# Farsi Tokenizer Robustness Dataset

</div>

## Dataset Description

This dataset is part of **TokSuite**, a comprehensive benchmark designed to measure how different tokenization strategies affect language model performance and robustness. This subset contains Farsi (Persian) multiple-choice text-completion questions with a range of real-world perturbations that probe tokenizer robustness.

- **Curated by:** R3 Research Team
- **Language(s):** Farsi/Persian (fa)
- **License:** Creative Commons

### Dataset Summary

TokSuite addresses a fundamental challenge in language model research: understanding how tokenization choices impact model behavior in isolation. The Farsi subset measures model performance on canonical questions and on perturbed variants covering orthographic variation, diacritics, morphological challenges, and noise commonly encountered when processing Farsi text.

**Key Features:**
- 45 canonical questions covering general knowledge, geography, science, and language understanding
- Multiple perturbation types reflecting real-world text variation in Farsi
- Parallel structure with the other TokSuite benchmark languages (English, Turkish, Italian, Chinese)
- Native-speaker curation ensuring linguistic authenticity

### Supported Tasks

- **Multiple-Choice Question Answering**: text-completion format with 4 answer choices
- **Tokenizer Robustness Evaluation**: measuring performance degradation under various text perturbations
- **Multilingual NLP Benchmarking**: evaluating language models on Farsi text understanding
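The robustness evaluation amounts to comparing accuracy on canonical versus perturbed versions of the same questions. A minimal sketch of that comparison (the prediction values below are invented for illustration; the real benchmark scores a model's chosen option against the dataset's `answer` index):

```python
def accuracy(predictions, references):
    """Fraction of items where the predicted choice index matches the reference."""
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

# Hypothetical model predictions on canonical vs. perturbed renderings
# of the same four questions (indices into the 4-option `choices` list).
canonical_refs = [0, 2, 1, 3]
canonical_preds = [0, 2, 1, 3]
perturbed_preds = [0, 1, 1, 0]

base = accuracy(canonical_preds, canonical_refs)  # accuracy on canonical text
pert = accuracy(perturbed_preds, canonical_refs)  # accuracy on perturbed text
degradation = base - pert                         # the robustness gap
print(base, pert, degradation)
```

The "performance drop" figures reported for TokSuite are differences of exactly this kind, averaged over perturbation categories.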
### Languages

The dataset contains text in Farsi (Persian) written in Arabic script (language code: `pes_Arab` / `fa`).

## Dataset Structure

### Data Instances

An example from the dataset:

```json
{
  "question": "رنگ آسمان",
  "choices": ["آبی است", "قرمز است", "سبز است", "زرد است"],
  "answer": 0,
  "answer_label": "A",
  "split": "test",
  "subcategories": "Cannonical",
  "lang": "pes_Arab",
  "second_lang": "The color of the sky is",
  "coding_lang": "",
  "notes": "The color of the sky is",
  "id": "301",
  "set_id": 301.0,
  "variation_id": 1.0
}
```

### Data Fields

| Field | Type | Description |
|-------|------|-------------|
| `question` | `string` | The question text in Farsi (Persian Arabic script) |
| `choices` | `list[string]` | Four multiple-choice answer options in Farsi |
| `answer` | `int64` | Index of the correct answer (0-3) |
| `answer_label` | `string` | Letter label of the correct answer (A, B, C, or D) |
| `split` | `string` | Dataset split identifier (all entries are "test") |
| `subcategories` | `string` | Perturbation category (e.g., "Cannonical", "Diacritics", "Romanization", "Noise") |
| `lang` | `string` | Language code (`pes_Arab` = Persian/Farsi in Arabic script) |
| `second_lang` | `string` | English translation or description of the question |
| `coding_lang` | `string` | Not applicable for this dataset (empty string) |
| `notes` | `string` | Additional context about the question or perturbation type |
| `id` | `string` | Unique question identifier |
| `set_id` | `float64` | Question set grouping identifier (ranges from 300 to 344) |
| `variation_id` | `float64` | Variation number within a question set |

### Data Splits

| Split | Number of Examples |
|-------|-------------------|
| test | 45 question sets with multiple variations |

All data is in the `test` split, as this is an evaluation benchmark.
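Given the fields above, perturbed variations can be re-joined to their canonical question through `set_id`. A small sketch using stand-in records that follow the schema (the values are illustrative, not taken from the dataset):

```python
from collections import defaultdict

# Stand-in records following the dataset schema (values are illustrative).
records = [
    {"id": "301", "set_id": 301.0, "variation_id": 1.0,
     "subcategories": "Cannonical", "answer": 0},
    {"id": "301-2", "set_id": 301.0, "variation_id": 2.0,
     "subcategories": "Noise", "answer": 0},
    {"id": "302", "set_id": 302.0, "variation_id": 1.0,
     "subcategories": "Cannonical", "answer": 2},
]

# Group all variations of the same underlying question via `set_id`,
# so canonical and perturbed forms can be compared pairwise.
sets = defaultdict(list)
for rec in records:
    sets[rec["set_id"]].append(rec)

# Pick out the canonical item of each set (the "Cannonical" subcategory).
canonical = {sid: next(r for r in rs if r["subcategories"] == "Cannonical")
             for sid, rs in sets.items()}
print(len(sets), canonical[301.0]["id"])
```

In practice the records would come from loading the Hub repository with the `datasets` library; the grouping logic is the same.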

## Dataset Creation

### Curation Rationale

This dataset was created to:
1. Systematically evaluate how different tokenization strategies handle Farsi text
2. Measure robustness against real-world text perturbations specific to Persian
3. Support research into tokenization's impact on language model behavior
4. Provide standardized benchmarks for Farsi language models

The questions were designed to be straightforward with high baseline accuracy, allowing researchers to cleanly measure performance degradation when perturbations are applied.

### Source Data

#### Data Collection and Processing

- **Canonical Questions**: 40 baseline questions in English were created covering general knowledge topics
- **Translation**: Native Farsi speakers translated the questions into Persian
- **Perturbations**: Each question underwent targeted perturbations designed to reflect morphological and orthographic characteristics of Farsi
- **Validation**: A model-in-the-loop process ensured high baseline accuracy across 14 different tokenizers

#### Perturbation Categories

1. **Orthographic Perturbations**
   - Script variations (traditional vs. simplified)
   - Romanization (Finglish: Farsi in Latin script)
   - Homoglyphs (visually similar characters with different Unicode code points)
   - Zero-width characters and spacing irregularities

2. **Diacritics**
   - Optional short vowels (fatha /a/, kasra /e/, damma /o/)
   - Common accent errors

3. **Morphological Challenges**
   - Contractions and compound words
   - Inflectional variations
   - Case marking and derivations

4. **Input Medium Challenges**
   - Non-native keyboard typing (e.g., typing Farsi on an English keyboard)
   - Systematic character substitutions

5. **Noise**
   - Typos and character-level errors
   - OCR-like errors
   - Character deletion/permutation
   - Spacing inconsistencies

6. **Linguistic Variety**
   - Code-switching
   - Dialectal variations
   - Historical spelling variations

7. **Structural Elements**
   - Unicode-based formatting
   - Stylistic variations
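To make the noise category concrete, here is an illustrative sketch of two character-level perturbations; this is not the authors' generation code, and the function names are our own:

```python
def delete_char(text: str, index: int) -> str:
    """Drop the character at `index`, simulating a deletion typo."""
    return text[:index] + text[index + 1:]

def swap_adjacent(text: str, index: int) -> str:
    """Swap the characters at `index` and `index + 1` (a transposition typo)."""
    chars = list(text)
    chars[index], chars[index + 1] = chars[index + 1], chars[index]
    return "".join(chars)

phrase = "رنگ آسمان"             # "the color of the sky", from the example above
print(delete_char(phrase, 0))     # drops the first letter
print(swap_adjacent("salam", 1))  # → "slaam" (a Finglish transposition)
```

Perturbations like these change the character sequence only slightly, yet can split a word into entirely different subword tokens, which is exactly what the benchmark measures.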

#### Who are the source data producers?

Native Farsi speakers curated and validated all questions and perturbations. The TokSuite research team at R3 designed the overall benchmark framework.

### Annotations

#### Annotation process

Questions were manually created and translated by native speakers. Each perturbation was carefully designed to reflect authentic variations encountered in real-world Farsi text processing.

#### Who are the annotators?

Native Farsi speakers with expertise in linguistics and NLP, working as part of the TokSuite project.

### Personal and Sensitive Information

The dataset contains only general knowledge questions and does not include any personal or sensitive information.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset contributes to improving language technology for Farsi speakers by:
- Enabling better understanding of tokenization challenges in Persian
- Supporting development of more robust multilingual models
- Providing standardized evaluation for Farsi NLP research

### Discussion of Biases

- **Language variety**: The dataset uses Modern Standard Persian and may not fully represent dialectal variations
- **Script focus**: Only Arabic script is used; romanized versions are included as perturbations
- **Domain coverage**: Questions focus on general knowledge and may not represent domain-specific language use
- **Question simplicity**: Designed for high baseline accuracy, which may not reflect real-world task complexity

### Other Known Limitations

- Relatively small dataset size (designed for evaluation, not training)
- Focus on multiple-choice format may not capture all aspects of language understanding
- Perturbations are specific to Farsi's characteristics, and findings may not generalize to all languages
- Models evaluated were trained at ~1B parameters; results may differ at larger scales

## Additional Information

### Dataset Curators

The dataset was curated by the TokSuite research team at R3.

### Licensing Information

Creative Commons license (cc)

### Citation Information

If you use this dataset in your research, please cite the TokSuite paper:

```bibtex
@inproceedings{toksuite2026,
  title={TokSuite: Measuring the Impact of Tokenizer Choice on Language Model Behavior},
  author={Anonymous},
  booktitle={Under review as a conference paper at ICLR 2026},
  year={2026},
  url={https://openreview.net/pdf?id=iExjy56t3o}
}
```

**Paper**: [TokSuite: Measuring the Impact of Tokenizer Choice on Language Model Behavior](https://openreview.net/pdf?id=iExjy56t3o)

### Contributions

This dataset is part of TokSuite, which includes:
- 14 language models with identical architectures but different tokenizers
- Multilingual benchmark datasets (English, Turkish, Italian, Farsi, Chinese)
- Comprehensive analysis of tokenization's impact on model behavior

### Key Findings from TokSuite (Farsi-Specific)

Based on the TokSuite paper findings for Farsi:

- **Byte-level tokenizers** (like ByT5) demonstrate greater robustness to Farsi perturbations despite higher computational costs
- **Multilingual tokenizers** with insufficient Farsi representation show significant performance degradation
- **Diacritics and Unicode formatting** present challenges across nearly all tokenization strategies
- **Average subword fertility** for Farsi ranges from 1.36 to 7.74 depending on tokenizer choice
- **Performance drops** on perturbed Farsi text are notably higher (avg. 0.45) compared to English equivalents (avg. 0.11)
- **Noise perturbations** cause more severe degradation in Farsi (0.22) than in English (0.15)
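The fertility figures above follow a simple definition: tokens emitted per whitespace-separated word. A sketch with two toy tokenizers marking the extremes (real measurements would use each tokenizer's actual encoding output, not these stand-ins):

```python
def fertility(tokenize, text: str) -> float:
    """Average number of tokens produced per whitespace-separated word."""
    words = text.split()
    tokens = [tok for word in words for tok in tokenize(word)]
    return len(tokens) / len(words)

def word_tokenize(word: str):
    """One token per word: the lower bound, fertility = 1.0."""
    return [word]

def char_tokenize(word: str):
    """One token per character: the byte/char-level extreme."""
    return list(word)

text = "رنگ آسمان"
print(fertility(word_tokenize, text))  # → 1.0
print(fertility(char_tokenize, text))  # 8 characters / 2 words = 4.0
```

Higher fertility means a tokenizer fragments Farsi words into more pieces, which correlates with the larger perturbation-induced drops reported above.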

### Contact

For questions or issues related to this dataset, please refer to the TokSuite project or contact the authors through the paper submission system.

---

<div align="center">

**Part of the [TokSuite Project](https://openreview.net/pdf?id=iExjy56t3o)**

*Understanding Tokenization's Role in Language Model Behavior*

</div>